DeepQLearning.jl
Support of AbstractEnvironment
This solver uses some functions that are broader than the minimal interface defined in RLInterface.jl and relies on internal fields such as env.problem in many places.
Ideally, the solver should support an RL environment defined purely through RLInterface.jl, without necessarily having an MDP or POMDP object associated with it.
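For illustration, here is a rough sketch (not from this issue) of the kind of standalone environment the solver would ideally accept. It assumes RLInterface.jl's AbstractEnvironment with reset! returning the initial observation and step! returning (obs, reward, done, info); the CounterEnv type and its dynamics are invented for the example.

```julia
# Hypothetical environment defined only through RLInterface.jl, with no MDP/POMDP behind it.
using RLInterface

mutable struct CounterEnv <: AbstractEnvironment
    s::Int
end
CounterEnv() = CounterEnv(0)

# Assumed signature: reset! puts the environment in its initial state and returns the observation.
function RLInterface.reset!(env::CounterEnv)
    env.s = 0
    return Float32[env.s]
end

# Assumed signature: step! applies the action and returns (observation, reward, done, info).
function RLInterface.step!(env::CounterEnv, a::Int)
    env.s += a
    done = abs(env.s) >= 5
    r = done ? 1.0 : 0.0
    return Float32[env.s], r, done, nothing
end

RLInterface.actions(env::CounterEnv) = (-1, 1)
```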
Yes, this is definitely important. In my class, more students had success with this package than any other, but this made it a little confusing to use.
Right now it is really designed to work with POMDPs.jl. Any AbstractEnvironment could technically be implemented using MDP and the generative interface.
initialstate, gen, and actions are all that's needed, I believe.
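To make that concrete, here is a minimal sketch of an MDP defined only through the generative interface, roughly what the comment above suggests. It assumes a POMDPs.jl version where gen returns a (sp=..., r=...) NamedTuple and initialstate returns a distribution (Deterministic here comes from POMDPModelTools); GridWalkMDP and its dynamics are invented for the example.

```julia
using POMDPs
using POMDPModelTools: Deterministic
using Random

# Toy 1-D walk MDP: the state is an integer position, actions move left or right.
struct GridWalkMDP <: MDP{Int, Int} end

POMDPs.initialstate(m::GridWalkMDP) = Deterministic(0)      # always start at position 0
POMDPs.actions(m::GridWalkMDP) = (-1, 1)
POMDPs.discount(m::GridWalkMDP) = 0.95
POMDPs.isterminal(m::GridWalkMDP, s::Int) = abs(s) >= 5

# Generative model: return the next state and reward as a NamedTuple.
function POMDPs.gen(m::GridWalkMDP, s::Int, a::Int, rng::AbstractRNG)
    sp = s + a                       # deterministic transition for simplicity
    r = sp >= 5 ? 1.0 : 0.0          # reward only when the right edge is reached
    return (sp=sp, r=r)
end
```

With those definitions, solve(DeepQLearningSolver(), GridWalkMDP()) should in principle train a policy, though the solver may have additional requirements in practice (e.g. actionindex or a convert_s method to vectorize states).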