✘ 'DummyVecEnv' object has no attribute 'video_recorder'
✘ We are unable to generate a replay of your agent, the package_to_hub process continues
Video preview not generated for pushed model: https://huggingface.co/Samini10/dqn-SpaceInvadersNoFrameskip-v4
Below is the complete log (error highlighted in bold):
2025-07-30 06:10:04.207856: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:477] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
E0000 00:00:1753855804.255054 166155 cuda_dnn.cc:8310] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
E0000 00:00:1753855804.267419 166155 cuda_blas.cc:1418] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
2025-07-30 06:10:04.306574: I tensorflow/core/platform/cpu_feature_guard.cc:210] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations. To enable the following instructions: AVX2 FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
Loading latest experiment, id=1
Loading logs/dqn/SpaceInvadersNoFrameskip-v4_1/SpaceInvadersNoFrameskip-v4.zip
A.L.E: Arcade Learning Environment (version 0.11.2+ecc1138)
[Powered by Stella]
Stacking 4 frames
Wrapping the env in a VecTransposeImage.
Uploading to Samini10/dqn-SpaceInvadersNoFrameskip-v4, make sure to have the rights
ℹ This function will save, evaluate, generate a video of your agent, create a model card and push everything to the hub. It might take up to some minutes if video generation is activated. This is a work in progress: if you encounter a bug, please open an issue.
Fetching 1 files: 0% 0/1 [00:00<?, ?it/s]
.gitattributes: 1.52kB [00:00, 299kB/s]
Fetching 1 files: 100% 1/1 [00:00<00:00, 2.47it/s]
Saving model to: hub/dqn-SpaceInvadersNoFrameskip-v4/dqn-SpaceInvadersNoFrameskip-v4
Saving video to /tmp/tmpiadiajd2/-step-0-to-step-1000.mp4
/usr/local/lib/python3.11/dist-packages/moviepy/config_defaults.py:1: DeprecationWarning: invalid escape sequence '\P'
  """
Moviepy - Building video /tmp/tmpiadiajd2/-step-0-to-step-1000.mp4.
Moviepy - Writing video /tmp/tmpiadiajd2/-step-0-to-step-1000.mp4
Moviepy - Done !
Moviepy - video ready /tmp/tmpiadiajd2/-step-0-to-step-1000.mp4
✘ 'DummyVecEnv' object has no attribute 'video_recorder'
✘ We are unable to generate a replay of your agent, the package_to_hub
process continues
✘ Please open an issue at
https://github.com/huggingface/huggingface_sb3/issues
ℹ Pushing repo dqn-SpaceInvadersNoFrameskip-v4 to the Hugging Face
Hub
Processing Files (0 / 0) : | | 0.00B / 0.00B
New Data Upload : | | 0.00B / 0.00B
[repeated upload progress bars omitted]
...oFrameskip-v4/pytorch_variables.pth: 100% 864/864 [00:02<?, ?B/s]
...NoFrameskip-v4/policy.optimizer.pth: 100% 13.5M/13.5M [00:02<00:00, 5.54MB/s]
...ceInvadersNoFrameskip-v4/policy.pth: 100% 13.5M/13.5M [00:02<00:00, 5.54MB/s]
...dqn-SpaceInvadersNoFrameskip-v4.zip: 100% 27.2M/27.2M [00:02<00:00, 11.2MB/s]
...Frameskip-v4/train_eval_metrics.zip: 100% 37.9k/37.9k [00:02<00:00, 15.6kB/s]
Processing Files (5 / 5) : 100% 54.3M/54.3M [00:04<00:00, 13.0MB/s, 14.3MB/s ]
New Data Upload : 100% 53.5M/53.5M [00:04<00:00, 12.8MB/s, 14.1MB/s ]
ℹ Your model is pushed to the hub. You can view your model here: https://huggingface.co/Samini10/dqn-SpaceInvadersNoFrameskip-v4
same here
# PLACE the variables you've just defined two cells above

# Define the name of the environment
env_id = "LunarLander-v3"

# TODO: Define the model architecture we used
model_architecture = "PPO"

## Define a repo_id
## repo_id is the id of the model repository from the Hugging Face Hub
## (repo_id = {organization}/{repo_name}, for instance ThomasSimonini/ppo-LunarLander-v2)
## CHANGE WITH YOUR REPO ID
repo_id = "tyoc213/ppo-LunarLander-v3-test"  # Change with your repo id, you can't push with mine 😄

## Define the commit message
commit_message = "Upload PPO LunarLander-v3 trained agent"

# Create the evaluation env and set render_mode="rgb_array"
eval_env = DummyVecEnv([lambda: gym.make(env_id, render_mode="rgb_array")])

# PLACE the package_to_hub function you've just filled here
package_to_hub(
    model=model,  # Our trained model
    model_name=model_name,  # The name of our trained model
    model_architecture=model_architecture,  # The model architecture we used: in our case PPO
    env_id=env_id,  # Name of the environment
    eval_env=eval_env,  # Evaluation environment
    repo_id=repo_id,  # id of the model repository on the Hugging Face Hub
    commit_message=commit_message,
)
ℹ This function will save, evaluate, generate a video of your agent,
create a model card and push everything to the hub. It might take up to 1min.
This is a work in progress: if you encounter a bug, please open an issue.
/bergcode/gym-deep-rl-course/.venv/lib/python3.12/site-packages/stable_baselines3/common/evaluation.py:70: UserWarning: Evaluation environment is not wrapped with a ``Monitor`` wrapper. This may result in reporting modified episode lengths and rewards, if other wrappers happen to modify these. Consider wrapping environment first with ``Monitor`` wrapper.
warnings.warn(
Saving video to /tmp/tmpzn4t81te/-step-0-to-step-1000.mp4
MoviePy - Building video /tmp/tmpzn4t81te/-step-0-to-step-1000.mp4.
MoviePy - Writing video /tmp/tmpzn4t81te/-step-0-to-step-1000.mp4
MoviePy - Done !
MoviePy - video ready /tmp/tmpzn4t81te/-step-0-to-step-1000.mp4
✘ 'DummyVecEnv' object has no attribute 'video_recorder'
✘ We are unable to generate a replay of your agent, the package_to_hub
process continues
✘ Please open an issue at
https://github.com/huggingface/huggingface_sb3/issues
ℹ Pushing repo tyoc213/ppo-LunarLander-v3-test to the Hugging Face
Hub
uploaded repo https://huggingface.co/tyoc213/ppo-LunarLander-v3-test
@tyoc213 This should be resolved by https://github.com/huggingface/huggingface_sb3/pull/47/files.
The fix in pr #47 is from February. I wonder if it will ever be merged...
This problem still exists, and I'm facing the same issue.
I am still facing the same issue:

This function will save, evaluate, generate a video of your agent, create a model card and push everything to the hub. It might take up to 1min. This is a work in progress: if you encounter a bug, please open an issue.
Saving video to /tmp/tmp9kctmjob/-step-0-to-step-1000.mp4
/usr/local/lib/python3.12/dist-packages/moviepy/config_defaults.py:47: SyntaxWarning: invalid escape sequence '\P'
  IMAGEMAGICK_BINARY = r"C:\Program Files\ImageMagick-6.8.8-Q16\magick.exe"
Moviepy - Building video /tmp/tmp9kctmjob/-step-0-to-step-1000.mp4.
Moviepy - Writing video /tmp/tmp9kctmjob/-step-0-to-step-1000.mp4
/usr/local/lib/python3.12/dist-packages/jupyter_client/session.py:203: DeprecationWarning: datetime.datetime.utcnow() is deprecated and scheduled for removal in a future version. Use timezone-aware objects to represent datetimes in UTC: datetime.datetime.now(datetime.UTC).
  return datetime.utcnow().replace(tzinfo=utc)
Moviepy - Done !
Moviepy - video ready /tmp/tmp9kctmjob/-step-0-to-step-1000.mp4
✘ 'DummyVecEnv' object has no attribute 'video_recorder'
✘ We are unable to generate a replay of your agent, the package_to_hub process continues
✘ Please open an issue at https://github.com/huggingface/huggingface_sb3/issues
ℹ Pushing repo BossBaby07/ppo-LunarLander-v3 to the Hugging Face Hub
A couple of workarounds:

Patch locally:
- You can patch the package locally (not great, but it works): find the huggingface_sb3 package in your venv and edit the push_to_hub.py file (for me the fix was on line 159).
- Make sure your model is saved (if you are already at push_to_hub, you should hopefully have a local copy), then restart Jupyter.
- Load the model using PPO.load (or the class for whichever algorithm you are using).
- Verify the results match what you expect (this confirms the model was loaded correctly).
- Use push_to_hub (it should now work).
Install from a specific branch:
If you are using the Colab notebook or are unfamiliar with patching local packages (the solution above), try adding a cell with this:
pip install git+https://github.com/Benjy-D/huggingface_sb3
which should update the package with the fix.
You'll still need to restart your notebook, so unless you want to go through the training loop again, make sure the model is saved (either to the Hub or locally), load it, verify the results, and then re-push.
Hope that helps 😄
Installing that fork fixed my issue. Thanks!
It worked for me too! Just to document it: in the Colab, the only change was in the first cell where we import the libraries:
!apt install swig cmake
!pip install -r https://raw.githubusercontent.com/huggingface/deep-rl-class/main/notebooks/unit1/requirements-unit1.txt
!pip install git+https://github.com/Benjy-D/huggingface_sb3
(I tried a lot of other ways of loading the libraries and changing versions, but this one ended up working directly.)
@tyoc213 This should be resolved by https://github.com/huggingface/huggingface_sb3/pull/47/files.
It worked for me. Thank you so much.
On installing the fork, I ran into this issue:
Traceback (most recent call last):
  File "<frozen runpy>", line 198, in _run_module_as_main
  File "<frozen runpy>", line 88, in _run_code
  File "/usr/local/lib/python3.12/dist-packages/rl_zoo3/push_to_hub.py", line 16, in <module>
    from huggingface_sb3.push_to_hub import _evaluate_agent, _generate_replay, generate_metadata
ImportError: cannot import name 'generate_metadata' from 'huggingface_sb3.push_to_hub' (/usr/local/lib/python3.12/dist-packages/huggingface_sb3/push_to_hub.py). Did you mean: '_generate_metadata'?