
OpenAI's Gym binding for Julia

Results: 11 OpenAIGym.jl issues (sorted by recently updated)

For architecture search across a variety of environments, it's crucial to access the parameters of the observation and action spaces. How do you do that with this library?
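A hedged sketch of one way to do this today, by reading the space metadata through the wrapped Python object (the `pyenv` field name and dotted PyCall attribute access are assumptions that may vary between OpenAIGym.jl / PyCall versions):

```julia
using OpenAIGym

env = GymEnv(:CartPole, :v0)

# Reach through to the underlying gym env via PyCall (field name assumed).
obs_space = env.pyenv.observation_space   # a gym.spaces.Box
act_space = env.pyenv.action_space        # a gym.spaces.Discrete

@show obs_space.shape                     # e.g. (4,)
@show obs_space.low obs_space.high        # per-dimension bounds
@show act_space.n                         # number of discrete actions
```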

How could we add the DeepMind BSuite envs? They have a wrapper for OpenAI Gym: https://github.com/deepmind/bsuite#using-bsuite-in-openai-gym-format. Got ProcGen to work if anyone wants it:
```julia
using OpenAIGym

function run_env(env_name, n_episodes)...
```
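That snippet is truncated above; as a hedged sketch, a generic runner in the style of the `run_episode` / `RandomPolicy` pattern from the package README could look like the following (wiring in bsuite or ProcGen would additionally depend on whatever env registration their Python-side wrappers perform):

```julia
using OpenAIGym

# Generic runner sketch: play n_episodes of a gym env with a random policy.
function run_env(name, version, n_episodes)
    env = GymEnv(name, version)                    # e.g. GymEnv(:CartPole, :v0)
    for i in 1:n_episodes
        R = run_episode(env, RandomPolicy()) do (s, a, r, s′)
            # inspect or record each transition here
        end
        @info "episode $i finished" total_reward = R
    end
end

run_env(:CartPole, :v0, 3)
```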

This commit 28d5953 introduced a bug. While iterating an episode, identical values are obtained for `s` and `s1`, e.g.
```julia
for (s, a, r, s1) in ep
    ...
end

debug> s
2-element PyArray{Float64,1}:...
```
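Until the regression is tracked down, one possible workaround, assuming the duplicate values come from both tuple entries aliasing the same underlying buffer, is to snapshot the observation inside the loop (`Episode` and `RandomPolicy` are the Reinforce.jl pieces this package re-exports):

```julia
using OpenAIGym

env = GymEnv(:CartPole, :v0)
ep  = Episode(env, RandomPolicy())
for (s, a, r, s1) in ep
    s = copy(s)   # snapshot s in case it aliases the buffer that s1 is written into
    # ... use s and s1; after the copy they should differ across the step ...
end
```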

Add a function to seed the RNG of the underlying Python gym environment.
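Until such a function exists, a hedged workaround is to reach through to the Python object directly (this assumes the wrapped env is exposed as `env.pyenv` and that the installed gym still provides the old-style `seed` methods):

```julia
using OpenAIGym

env = GymEnv(:CartPole, :v0)
env.pyenv.seed(1234)               # seed the environment's RNG
env.pyenv.action_space.seed(1234)  # also seed action-space sampling
reset!(env)
```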

Uses `unsafe_gettpl!` from https://github.com/JuliaPy/PyCall.jl/pull/486 for faster access to the elements of the `(s, r, done, info)` tuple returned by gym's `env.step` in Python. This is two extra commits...

I recently became interested in reinforcement learning, so I tried my luck with these OpenAI environments. I noticed, however, a substantial drop in performance in comparison to a...
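For anyone trying to reproduce this, a minimal way to time the Julia side is sketched below (BenchmarkTools is assumed, and the runner follows the `run_episode` / `RandomPolicy` pattern from the README; the baseline to compare against would be the matching bare `env.step` loop in Python):

```julia
using OpenAIGym, BenchmarkTools

env = GymEnv(:CartPole, :v0)
noop(sars) = nothing   # per-step callback that does nothing

# Times one random episode: mostly stepping plus PyCall overhead.
@btime run_episode($noop, $env, RandomPolicy())
```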

In the Go environment, the moves are passed as a (3, 9, 9) array from the Python side, and the underlying memory seems shuffled on the Julia side. Probably want this to...
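If the cause turns out to be the usual NumPy row-major vs. Julia column-major mismatch, a hedged normalization looks like the following (the reversed-dims arrival is only simulated here, not taken from the actual Go env):

```julia
# If the (3, 9, 9) NumPy observation shows up in Julia with its dimensions
# reversed (a 9×9×3 array over the same C-ordered buffer), reversing the
# axes recovers the intended (plane, row, col) indexing.
raw   = rand(Float64, 9, 9, 3)          # stand-in for the observation as received
moves = permutedims(raw, (3, 2, 1))     # reorder back to (plane, row, col)
@assert size(moves) == (3, 9, 9)
```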

Something like:
```julia
pick_action(actions, state) = rand(actions)
result = run_episode(pick_action)
```
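For comparison, a sketch of the shape this takes with the Reinforce.jl policy interface that OpenAIGym.jl re-exports (the `action(policy, r, s, A)` signature is assumed from Reinforce.jl, and `UniformRandom` is a name made up here):

```julia
using OpenAIGym
import OpenAIGym: action   # re-exported from Reinforce.jl; alternatively `import Reinforce: action`

struct UniformRandom <: AbstractPolicy end
action(::UniformRandom, r, s, A) = rand(A)   # ignore reward/state, sample uniformly

env = GymEnv(:CartPole, :v0)
result = run_episode(env, UniformRandom()) do (s, a, r, s′)
    # each step yields the (s, a, r, s′) transition
end
```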

Something like:
```julia
type Episode
    steps_remaining::Int
end

# iteration methods... decrement steps_remaining and get the state

env = Env(....)
for state in Episode(env, maxsteps = 100)
    # pick an...
```
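A sketch of roughly how this reads with the `Episode` iterator that Reinforce.jl (re-exported here) ended up providing; the manual break stands in for a `maxsteps` option, since the exact keyword may differ by version:

```julia
using OpenAIGym

env = GymEnv(:CartPole, :v0)
ep  = Episode(env, RandomPolicy())
for (i, (s, a, r, s′)) in enumerate(ep)
    # the policy has already picked `a`; cap the episode length manually
    i >= 100 && break
end
@show ep.total_reward   # accumulated reward (field name assumed)
```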