DeepQLearning.jl
Tests still contain "using RLInterface"
I think that is redundant, as we switched to CommonRLInterface and RLInterface is no longer in the dependencies.
Well, I found the issue.
The RLInterface dependency is in the prototype.jl file, which is not part of the tests.
I wanted to take a look at how to use DeepQLearning with POMDPs. There is an example with SubHunt.jl. Unfortunately, it is not up to date.
Ok, so you are saying that there actually isn't any problem related to RLInterface right now? @MaximeBouton, could you leave a short comment in those files explaining what prototype.jl and flux_test.jl are?
> I wanted to take a look at how to use DeepQLearning with POMDPs. There is an example with SubHunt.jl. Unfortunately, it is not up to date.
Where is the example? After loading POMDPModelTools, you should be able to do convert(CommonRLInterface.AbstractEnv, SubHuntPOMDP()) or something like that. Deep Q-learning will probably not work too well on that problem, though; it is quite hard to plan without a belief updater in that environment.
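For reference, a minimal sketch of that conversion, assuming SubHunt.jl is installed; the exact conversion call and the interaction loop follow the suggestion above and the CommonRLInterface API, so treat the details as an assumption rather than a verified recipe:

```julia
using POMDPs
using POMDPModelTools
using CommonRLInterface
using SubHunt  # assumed to provide SubHuntPOMDP

pomdp = SubHuntPOMDP()

# Wrap the POMDP as a CommonRLInterface environment; without a
# belief updater this exposes raw observations only.
env = convert(CommonRLInterface.AbstractEnv, pomdp)

# Basic CommonRLInterface interaction with the wrapped POMDP.
CommonRLInterface.reset!(env)
o = CommonRLInterface.observe(env)
r = CommonRLInterface.act!(env, rand(CommonRLInterface.actions(env)))
```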
I was looking for an example of how to use POMDPs with DeepQLearning.jl and found prototype.jl, where @MaximeBouton probably tested whether the solver works correctly with SubHunt.jl. Along the way I found out that RLInterface.jl is deprecated.
I think it would be beneficial to have an example of POMDPs usage; SubHunt could serve as that.
However, the example should use CommonRLInterface. Also, SubHunt.jl would need to have initialobs(::SubHuntPOMDP, ::SubState) defined, along the lines of the sketch below.
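For illustration only, a hedged sketch of what such a definition might look like; reusing the state-only observation model for the initial observation is an assumption here, not SubHunt.jl's actual API:

```julia
using POMDPs
using SubHunt  # assumed to export SubHuntPOMDP and SubState

# Hypothetical sketch: define the initial observation distribution
# by reusing the observation model at the initial state. Whether
# SubHunt.jl's observation model can be queried from a state alone
# is an assumption.
POMDPs.initialobs(m::SubHuntPOMDP, s::SubState) = observation(m, s)
```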
@Omastto1 thank you for reporting this, I should remove this prototype.jl file. There is an example with POMDPs in the tests, though: check https://github.com/JuliaPOMDP/DeepQLearning.jl/blob/master/test/runtests.jl#L150 (DDRQN on TigerPOMDP).
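For anyone landing here looking for POMDP usage, a condensed sketch along those lines; the hyperparameters and the LSTM input size are illustrative assumptions (the input size of 1 assumes the Tiger observation converts to a length-1 vector), not the exact test setup:

```julia
using DeepQLearning
using POMDPs
using POMDPModels   # provides TigerPOMDP
using POMDPPolicies
using Flux

pomdp = TigerPOMDP()

# Recurrent Q-network so the agent can integrate observations over
# time instead of relying on a belief updater.
model = Chain(LSTM(1, 32), Dense(32, length(actions(pomdp))))

# Epsilon-greedy exploration with a linearly decaying epsilon.
exploration = EpsGreedyPolicy(pomdp, LinearDecaySchedule(start=1.0, stop=0.01, steps=5000))

solver = DeepQLearningSolver(qnetwork=model,
                             exploration_policy=exploration,
                             max_steps=10_000,
                             learning_rate=0.005,
                             recurrence=true,   # train as a recurrent DQN
                             double_q=true)

policy = solve(solver, pomdp)
```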
(let's close once I remove the file)