stable-baselines3
The Atari Breakout paddle doesn't move and just sticks to the right side.
Hello,
I was working on my Atari Breakout model. The project is finally complete, but it doesn't work as well as shown in the tutorial. Here are two screenshots of what it looks like:
Here is my code:
```python
# Import dependencies
import gym
from stable_baselines3 import A2C
from stable_baselines3.common.vec_env import VecFrameStack
from stable_baselines3.common.evaluation import evaluate_policy
from stable_baselines3.common.env_util import make_atari_env
from stable_baselines3.common.env_util import make_vec_env
import os
from gym.utils import play
from stable_baselines3.ddpg.policies import CnnPolicy
from ale_py import ALEInterface
from ale_py.roms import Breakout

ale = ALEInterface()
ale.loadROM(Breakout)

env = make_atari_env('Breakout-v0', seed=0)
log_path = os.path.join('Training', 'Logs')
model = A2C('CnnPolicy', env, verbose=1, tensorboard_log=log_path)

a2c_path = os.path.join('Training', 'Logs', 'A2C_300k_model')
model.save(a2c_path)
env.observation_space

del model
model = A2C.load(a2c_path, env)
evaluate_policy(model, env, n_eval_episodes=100, render=True)
```
I am also only getting an average score of about 1, unlike in the video, where the code gives him a 4-5 point average. Please help. The tutorial I am following: https://www.youtube.com/watch?v=Mut_u40Sqz4&t=7103s&ab_channel=NicholasRenotte. Thanks
Hello,
it doesn't seem that you are training the agent at all. We also recommend using `BreakoutNoFrameskip-v4` instead of `Breakout-v0`.
If you want a working agent, please use the RL Zoo (cf. doc); you can find a pre-trained agent, hyperparameters, and instructions to reproduce the experiment on the Hugging Face page: https://huggingface.co/sb3/a2c-BreakoutNoFrameskip-v4
Hi, sorry for the late response. The reason I am not training is that the machine I am using isn't powerful enough to train it for 100k steps, so I was using a pretrained model from the tutorial I was following, but it's not working. I even tried the NoFrameskip version of the Breakout env, but it still didn't work. Please help. Thanks.
I'm not sure what you tried, but if you follow the instructions on the page:
```shell
# Download model and save it into the logs/ folder
python -m utils.load_from_hub --algo a2c --env BreakoutNoFrameskip-v4 -orga sb3 -f logs/
python enjoy.py --algo a2c --env BreakoutNoFrameskip-v4 -f logs/
```
it does work... (you may need the pickle5 package or Python 3.8+)
Hi, I actually tried it, but whenever I run that command I get this error: `C:\Users\shiva\AppData\Local\Programs\Python\Python310\python.exe: Error while finding module specification for 'utils.load_from_hub' (ModuleNotFoundError: No module named 'utils')`
Also, my question is: why isn't the pretrained model I installed working, when it works fine in the tutorial?
> I actually tried it but whenever I run that command I get this error: `C:\Users\shiva\AppData\Local\Programs\Python\Python310\python.exe: Error while finding module specification for 'utils.load_from_hub' (ModuleNotFoundError: No module named 'utils')`

you need to follow the install instructions from the RL Zoo (the link is on the page) and be in the RL Zoo folder.
Yes, I will try that, but my question is: why isn't the model that I installed working as shown in the tutorial? Please help.
Please help. Thanks.
Please help us to help you by filling out the issue template completely.
Hi, I have done that. Please check this link: https://github.com/DLR-RM/stable-baselines3/issues/1020