CommonRLInterface.jl
A minimal reinforcement learning environment interface with additional opt-in features.
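For context, the required core of the interface (`reset!`, `actions`, `observe`, `act!`, `terminated`) can be illustrated with a tiny countdown environment. The sketch below defines standalone stand-ins for these five functions rather than importing the package, so the names match the interface but nothing here is the package's actual code:

```julia
# Standalone stand-ins for the five required interface functions;
# the real package defines these for subtypes of AbstractEnv.
mutable struct CountdownEnv
    n::Int
end

reset!(env::CountdownEnv) = (env.n = 3; nothing)
actions(env::CountdownEnv) = (:tick,)          # the available actions
observe(env::CountdownEnv) = env.n             # what the agent sees
terminated(env::CountdownEnv) = env.n <= 0
act!(env::CountdownEnv, a) = (env.n -= 1; -1.0)  # act! returns the reward

# A minimal rollout loop written against only these five functions:
function rollout!(env)
    reset!(env)
    total = 0.0
    while !terminated(env)
        total += act!(env, first(actions(env)))
    end
    return total
end

rollout!(CountdownEnv(0))  # accumulates reward until terminated
```

An algorithm written against only these functions works with any environment that implements them; everything else in the interface is opt-in.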
I recently wrote a proposal about multiple agent problems, and it convinced me that we should have separate interfaces (and maybe separate abstract types) for different multi-agent formalisms. A few...
Currently, you can pass either argument instances or types to `provided`. This is [discouraged by the Julia style guide](https://docs.julialang.org/en/v1/manual/style-guide/#Avoid-confusion-about-whether-something-is-an-instance-or-a-type), and it is ambiguous because some arguments may themselves be types. `provided` should only accept argument instances.
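The ambiguity can be demonstrated with a small sketch. This is not CommonRLInterface's actual implementation, just an illustration using `hasmethod` and a hypothetical interface function whose second argument is itself a type:

```julia
struct MyEnv end

# Hypothetical optional interface function whose argument is a *type*:
valtype_of(env, ::Type{T}) where {T} = T

# Type-based check: the caller must remember to wrap the type argument,
# and the "obvious" query silently returns false.
hasmethod(valtype_of, Tuple{MyEnv, Type{Int}})   # correct, but easy to forget
hasmethod(valtype_of, Tuple{MyEnv, Int})         # easy mistake: false

# Instance-based check: pass exactly the arguments you would call with.
provided_sketch(f, args...) = hasmethod(f, Tuple{map(typeof, args)...})
provided_sketch(valtype_of, MyEnv(), Int)        # unambiguous
```

With instances, the query mirrors the call site exactly, which is the behavior the style guide recommends.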
After discussion in #40, I think there is good reason to get rid of `AbstractMarkovEnv` and `AbstractZeroSumEnv`. I think we will eventually be able to handle multiplayer environments with...
It would be good to move the docs currently in the README to Documenter.jl. I haven't set it up yet, though. This is a good guide for how to think...
When we start to handle more general games, we may want to follow the [Wikipedia description of an extensive-form game](https://en.wikipedia.org/wiki/Extensive-form_game) and make sure that we can express all...
Should we add functions `parse_state(env, string)` and `parse_action(env, string)` to help build interactive debugging tools? It would be tempting to just let the user define `parse_state` by overloading `Base.parse`. However,...
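A sketch of what opting in to the proposed hooks might look like. Note that `parse_state`/`parse_action` are the functions proposed above, not an existing CommonRLInterface API, and `GridEnv` is a made-up toy environment:

```julia
# Toy environment whose state is an (x, y) grid position.
struct GridEnv
    state::Tuple{Int,Int}
end

# An environment author opts in by defining how to read a state or an
# action from the strings a user types into a debugging tool:
parse_state(env::GridEnv, s::AbstractString) =
    Tuple(parse.(Int, split(strip(s, ['(', ')']), ',')))

parse_action(env::GridEnv, s::AbstractString) = Symbol(strip(s))

env = GridEnv((0, 0))
parse_state(env, "(3, 4)")   # a tool can round-trip user input to a state
parse_action(env, "up")      # and to an action
```

Taking `env` as the first argument lets the parsing dispatch on the environment type, which `Base.parse(::Type{T}, s)` alone cannot do when two environments share a state type.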
In #1, several people talked about a way for algorithm writers to specify requirements. This seems like a good idea, but designing it will be challenging. Here are a few...
We need to decide on a concept for spaces (e.g. the action space and observation space). One option would be to have an `AbstractSpace` type. I am against this. Instead,...
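One alternative (my illustration, not a decided design) is a duck-typed concept: any object supporting `in` and `rand` can serve as a space, with no `AbstractSpace` supertype required. The `Interval` type below is hypothetical:

```julia
using Random

# Hypothetical continuous action space; it is a "space" only by virtue
# of supporting `in` and `rand`, not by subtyping anything.
struct Interval
    lo::Float64
    hi::Float64
end

Base.in(x, i::Interval) = i.lo <= x <= i.hi
Base.rand(rng::Random.AbstractRNG, i::Interval) =
    i.lo + (i.hi - i.lo) * rand(rng)

# Existing Julia collections already satisfy the same concept for free:
discrete_actions = (:left, :right)   # `in` and `rand` both work
box = Interval(-1.0, 1.0)

:left in discrete_actions
rand(Random.default_rng(), box) in box
```

The advantage is that tuples, vectors, ranges, and `Set`s are usable as spaces immediately; the disadvantage is that the concept is documented rather than enforced by the type system.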
As discussed in #6, we won't have any automatic defaults, but we should make it easy to adopt a default if needed. Someone needs to design this feature.
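One possible shape for this feature (my sketch, not a decided design): defaults live in a dedicated module, and an environment author adopts one explicitly rather than inheriting it silently. The `Defaults` module and `LoopEnv` type below are hypothetical:

```julia
# Hypothetical home for opt-in defaults.
module Defaults
    # A common-sense default: the environment is never terminal.
    terminated(env) = false
end

struct LoopEnv end   # toy environment that genuinely never terminates

# Explicit opt-in: the author forwards to the default on purpose,
# so there is no silent fallback for environments that forget to define it.
terminated(env::LoopEnv) = Defaults.terminated(env)

terminated(LoopEnv())
```

Because nothing falls back automatically, calling `terminated` on an environment that never opted in is still a `MethodError`, which keeps the "no automatic defaults" decision from #6 intact.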