Akash Saravanan
I don't remember where I got the files from, but I put them into my repo so that I can set up the environment anywhere. Feel free to grab the...
I've submitted a new version of the self-play example in #322, but I'm curious as to which other example code doesn't work. I could take a look at it if you can...
If you pip-installed poke-env, `rl_with_open_ai_gym_wrapper.py` should work. If you installed it directly from GitHub, use `rl_with_new_open_ai_gym_wrapper.py` instead. Could you confirm which version of poke-env you have and which script you tried?...
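If it helps, a quick way to check which version is actually installed (this works for both pip installs and installs from a GitHub clone, as long as it was installed as a package):

```python
# Query the installed poke-env version via the standard library.
# Note: this only works if poke-env was installed (pip install / pip install .),
# not if you're just importing from a local clone on sys.path.
from importlib.metadata import version

print(version("poke-env"))
```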
Oh I think I see the issue. Try using `from poke_env.environment import AbstractBattle` instead of `from poke_env.abstract_battle import AbstractBattle`.
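If you need the same code to run against both module layouts, a defensive sketch (the fallback path is an assumption based on older poke-env layouts; adjust to your install):

```python
try:
    # Recent poke-env re-exports AbstractBattle from poke_env.environment
    from poke_env.environment import AbstractBattle
except ImportError:
    # Older layout (assumption): the class lived in its own submodule
    from poke_env.environment.abstract_battle import AbstractBattle
```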
This might be related to #246
Another weird scenario:
```
> print(battle.won, battle.lost, battle.finished, battle.turn)
None False False 8
> print(battle.force_switch, force_switch, battle.trapped)
False False True
> print(battle.team)
{'p1: Scyther': scyther (pokemon object) [Active: False, Status:...
```
Unfortunately not. This was the only error that popped up while training and I haven't been able to reproduce it since then. I've had the other two scenarios I mentioned...
For context, I'm essentially trying to create a state that comprises all the available battle information. [The figure in this paper](https://ieee-cog.org/2019/papers/paper_175.pdf) more or less covers everything I'm looking...
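For anyone else attempting this, a minimal sketch of the direction I mean, using a handful of poke-env `Battle` attributes; the full state would also add stats, statuses, field conditions, and everything else in the paper's figure:

```python
import numpy as np

def embed_battle(battle):
    """Sketch: pack a few pieces of battle state into a flat vector.
    Attribute names follow poke-env's Battle API; extend as needed."""
    # Base power of up to 4 available moves, scaled; -1 pads missing slots
    moves = np.full(4, -1.0)
    for i, move in enumerate(battle.available_moves[:4]):
        moves[i] = move.base_power / 100.0

    # Remaining (non-fainted) team counts for both sides
    n_team = sum(not mon.fainted for mon in battle.team.values())
    n_opp = sum(not mon.fainted for mon in battle.opponent_team.values())

    # HP fractions of the two active Pokemon (0 when unknown)
    my_hp = battle.active_pokemon.current_hp_fraction if battle.active_pokemon else 0.0
    opp_hp = (battle.opponent_active_pokemon.current_hp_fraction
              if battle.opponent_active_pokemon else 0.0)

    return np.concatenate([moves, [n_team / 6, n_opp / 6, my_hp, opp_hp]])
```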
> In the gym API, your bottleneck might be on the model side, where batching can be helpful.

I'm aware of a couple of implementations that do this, but they...
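For reference, the model-side batching being described looks roughly like this (a sketch; `policy_net` is a placeholder for whatever network you're using):

```python
import numpy as np
import torch

def batched_actions(policy_net, observations):
    """Stack observations from several parallel environments and run a
    single forward pass, instead of one model call per environment."""
    obs_batch = torch.as_tensor(np.stack(observations), dtype=torch.float32)
    with torch.no_grad():
        q_values = policy_net(obs_batch)   # one forward pass for all envs
    return q_values.argmax(dim=1).tolist()  # greedy action per env
```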
Thanks @MatteoH2O1999! @mancho2000 this is what I ended up using for my implementation of self-play. Note that I coded my own implementation of DQN/PPO, so those bits might not match...
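At a high level, the self-play pattern here is just "periodically freeze a copy of the learner and use it as the opponent." A sketch with placeholder names (`train_step` and `set_opponent_policy` are hypothetical hooks, not poke-env API):

```python
import copy

def self_play_loop(env, learner, n_steps, sync_every=10_000):
    # Start against a frozen snapshot of the current policy
    env.set_opponent_policy(copy.deepcopy(learner.policy))  # hypothetical hook
    for step in range(n_steps):
        learner.train_step(env)  # hypothetical update (DQN/PPO/etc.)
        # Periodically refresh the opponent so the agent always trains
        # against a recent version of itself
        if step % sync_every == 0:
            env.set_opponent_policy(copy.deepcopy(learner.policy))
```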