Can I use my own game environment?
Hi,
I want to use Dopamine with my own game environment; I'm not using the Asterix, Pong, or other Atari environments.
Does Dopamine allow that?
Thanks
Yes, although at the moment you will need to modify some Atari-specific parameters (convolutional network, observation shape, etc.). I believe most of that code is in place, but stay tuned for an update that will make all of this easier.
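To make that concrete: the Atari-specific assumptions are essentially the 84x84 grayscale observation, the 4-frame stack, and the Nature-DQN convolutional network. As a rough TF 1.x sketch (not Dopamine's actual code; the function name and layer sizes here are invented for illustration), a replacement Q-network for a smaller non-Atari observation could look like this, wired in wherever your Dopamine version builds its network:

```python
import tensorflow as tf


def small_q_network(state, num_actions):
    """Illustrative Q-network for a small non-Atari observation.

    `state` is a [batch, height, width, channels] tensor; replace the
    Atari 84x84x4 frame stack with whatever your environment produces.
    """
    net = tf.cast(state, tf.float32)
    net = tf.layers.conv2d(net, filters=16, kernel_size=3, strides=1,
                           activation=tf.nn.relu)
    net = tf.layers.flatten(net)
    net = tf.layers.dense(net, units=64, activation=tf.nn.relu)
    return tf.layers.dense(net, units=num_actions)  # one Q-value per action
```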
Hi Marc,
Thank you for the answer!
So I think I need to clone the atari folder (https://github.com/google/dopamine/tree/master/dopamine/atari) and change the Atari-specific parameters in preprocessing.py, run_experiment.py, and train.py. Is that correct?
Merry Christmas!
Yes, that's right. Take a look around the open/closed issues here; I believe other people have generated similar code. Good luck!
I believe my team has just succeeded in doing that. You can check out our repo: https://github.com/KatyNTsachi/Hierarchical-RL. The gym game is in the folder "gym_cars".
Hope this helps!
Hi tsachiblau,
Thank you, great work! I'll try to create my own env from your code.
@tsachiblau Can you please describe all the changes you made to create your own environment? Also, can we extend the "game-like" setup to something very simple? For example, below is the environment I want to replicate.
Board Environment:
|O|O|*|O|O|
|O|O|O|O|O|
|O|O|O|O|O|
|O|#|O|$|O|
|$|O|O|O|O|
State space: (x,y) where 0 <= x <= 4, 0 <= y <= 4
Action space: {0,1,2,3} where 0 == Left, 1 == Right, 2 == Down, 3 == Up
Reward: -1 for taking one step, -INF for out of bound, -INF for landing at #, 6 for $, 0 for O
Transition function: 0:(x,y) -> (x-1,y), 1:(x,y) -> (x+1,y), 2:(x,y) -> (x, y-1), 3:(x,y) -> (x,y+1)
Starting state: (0, 2); Terminal state: (3, 3)
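That board is small enough to write directly as a gym env. Here is a minimal sketch using the classic gym reset/step API; note that the coordinates in the question are ambiguous (the stated start (0, 2) doesn't obviously match the * in the drawing), so the hazard/bonus cell positions below are placeholder assumptions to adjust, and -INF is approximated by a large negative constant:

```python
import gym
import numpy as np
from gym import spaces


class BoardEnv(gym.Env):
    """5x5 grid world sketched above; coordinates follow the stated
    transition function (x grows to the right, y grows upward)."""

    HAZARDS = {(1, 1)}           # '#' cells -- placeholder positions
    BONUSES = {(0, 0), (3, 1)}   # '$' cells -- placeholder positions
    START, TERMINAL = (0, 2), (3, 3)
    NEG_INF = -1e9               # stand-in for -INF

    def __init__(self):
        self.observation_space = spaces.MultiDiscrete([5, 5])  # (x, y)
        self.action_space = spaces.Discrete(4)  # 0=L, 1=R, 2=D, 3=U
        self.state = self.START

    def reset(self):
        self.state = self.START
        return np.array(self.state)

    def step(self, action):
        x, y = self.state
        dx, dy = [(-1, 0), (1, 0), (0, -1), (0, 1)][action]
        nx, ny = x + dx, y + dy
        if not (0 <= nx <= 4 and 0 <= ny <= 4):
            # Out of bounds: heavy penalty; here the agent stays put
            # (one possible reading -- the spec only says -INF).
            return np.array(self.state), self.NEG_INF, False, {}
        self.state = (nx, ny)
        reward = -1.0  # cost of taking one step
        if self.state in self.HAZARDS:
            reward += self.NEG_INF
        elif self.state in self.BONUSES:
            reward += 6.0
        done = self.state == self.TERMINAL
        return np.array(self.state), reward, done, {}
```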
@vishal-keshav First of all, you can check out our repo: https://github.com/KatyNTsachi/Hierarchical-RL. Right now it's basically the original Dopamine with our env in it.
If you want to see all the changes we made, you can compare our code to the original Dopamine code at this link: https://github.com/KatyNTsachi/Hierarchical-RL/commit/40e995d9ab8cdab396415ea77c9041a53e3acbb5
You also need to follow this guide for creating your own env: https://stackoverflow.com/questions/45068568/is-it-possible-to-create-a-new-gym-environment-in-openai
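Following that guide, the registration part is only a few lines. A hypothetical sketch (the package name and env id are invented; the layout mirrors the gym-foo example in the linked answer):

```python
# gym_board/__init__.py  -- hypothetical package, as in the linked guide
from gym.envs.registration import register

register(
    id='Board-v0',                          # made-up id
    entry_point='gym_board.envs:BoardEnv',  # module:class of your env
)
```

After a `pip install -e .` of that package, `gym.make('Board-v0')` returns your environment, and you can hand it to your modified Dopamine runner wherever it would normally construct the Atari environment.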
@tsachiblau, or anyone else who might know:
Just to clarify: can we use Dopamine with our own non-OpenAI environments, as long as we can connect the observations? In my case, I am trying to use the DeepMind Lab environment.
I think the point of Dopamine is to serve as a kind of benchmark for RL, so adding your own env, even if possible, may be meaningless.
Regardless, the question still stands without a clear answer.
@ryanprinster You can create your own env and then wrap it following this guide: https://stackoverflow.com/questions/45068568/is-it-possible-to-create-a-new-gym-environment-in-openai
Then you will have a gym env (with your own game).
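For the DeepMind Lab case specifically, the same wrapping idea applies: implement the gym interface on top of the Lab Python API. A rough sketch follows; the deepmind_lab calls are taken from its published Python API but should be checked against your installed version, and the discrete-action table is purely illustrative (consult env.action_spec() for the real dimension order):

```python
import gym
import numpy as np
from gym import spaces
import deepmind_lab

# Illustrative mapping from discrete actions to Lab action vectors;
# the 7 dimensions and their order must match env.action_spec().
_ACTIONS = {
    0: np.array([-20, 0, 0, 0, 0, 0, 0], dtype=np.intc),  # look left
    1: np.array([20, 0, 0, 0, 0, 0, 0], dtype=np.intc),   # look right
    2: np.array([0, 0, 0, 1, 0, 0, 0], dtype=np.intc),    # move forward
}


class LabGymWrapper(gym.Env):
    """Minimal gym-style wrapper around a DeepMind Lab level."""

    def __init__(self, level='seekavoid_arena_01'):
        # Older Lab versions name the observation 'RGB_INTERLACED'.
        self._lab = deepmind_lab.Lab(level, ['RGB_INTERLEAVED'],
                                     config={'width': '84', 'height': '84'})
        self.action_space = spaces.Discrete(len(_ACTIONS))
        self.observation_space = spaces.Box(
            low=0, high=255, shape=(84, 84, 3), dtype=np.uint8)

    def reset(self):
        self._lab.reset()
        return self._lab.observations()['RGB_INTERLEAVED']

    def step(self, action):
        reward = self._lab.step(_ACTIONS[int(action)], num_steps=4)
        done = not self._lab.is_running()
        # observations() is only valid while the episode is running.
        obs = (np.zeros((84, 84, 3), np.uint8) if done
               else self._lab.observations()['RGB_INTERLEAVED'])
        return obs, reward, done, {}
```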