Build a Docker image
I saw that OSX isn't supported, so I'm working on creating a Dockerfile that should make it easier to experiment with it.
Thanks. Alternatively, if you can figure out how to make it build and run on MacOSX, I'd also be interested to see what needs to be changed to make it work.
https://github.com/deepmind/lab/pull/24 This is working fairly well, at least for building the image and running headless. I'm getting some odd errors related to OpenGL though. Mesa should fall back to one of the software drivers if it can't find a hardware version, but things are acting strangely.
After starting a vnc4server session on display :1 and then running glxinfo, I get:
glxinfo
name of display: :1
libGL error: failed to load driver: swrast
Segmentation fault
Using vncserver (tightvnc, which is the default in 14.04) and tigervnc, I get this:
glxinfo
name of display: :1
Error: couldn't find RGB GLX visual or fbconfig
Error: couldn't find RGB GLX visual or fbconfig
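For reference, here is a minimal sketch of how one might check whether Mesa's software rasterizer is actually being picked up, assuming glxinfo is installed in the container; LIBGL_ALWAYS_SOFTWARE is a standard Mesa variable that forces the software GL path:

import os
import subprocess

# Force Mesa's software GL path and query the renderer on the VNC display.
env = dict(os.environ, DISPLAY=':1', LIBGL_ALWAYS_SOFTWARE='1')
output = subprocess.check_output(['glxinfo'], env=env)
for line in output.decode().splitlines():
    if 'OpenGL renderer string' in line:
        # Expect "llvmpipe" or "softpipe" here when software rendering works.
        print(line)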
Do you expect DeepMind Lab to work like OpenAI Universe? Just import a Python package and it does the rest of the magic of pulling containers and starting the game?
I didn't know universe could do all that :)
The plan is to hook it up to Universe to use those agents as an addition/alternative. I'd love to have it take advantage of any other features to make it as simple as possible. Do you happen to know where the logic for that is in the source?
@karpathy @nottombrown <- These folks would be the experts.
I'm not 100% sure how to integrate a brand new environment.
I believe openai/universe is a registry built on top of openai/gym.
The environment registration happens here: https://github.com/openai/universe/blob/master/universe/init.py
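If it follows the same pattern, a hedged sketch of what "import deeplab" would do under the hood is just a gym registration call; the entry point module, class, and keyword arguments below are assumptions, not existing code:

from gym.envs.registration import register

# Register a DeepMind Lab level under a universe-style env id so that
# gym.make('deeplab.seekAvoidArena-v0') can find it.
register(
    id='deeplab.seekAvoidArena-v0',
    entry_point='deeplab.env:DeepLabEnv',    # hypothetical wrapper class
    kwargs={'level': 'seekavoid_arena_01'},  # DeepMind Lab level name
)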
I would expect the setup to be something like this after installing Docker:
import gym
import deeplab # register the deeplab environments
env = gym.make('deeplab.seekAvoidArena-v0')
env.configure(remotes=1) # create one Docker container
observation_n = env.reset()
while True:
    # your agent generates action_n at 60 frames per second
    action_n = [[('KeyEvent', 'ArrowUp', True)] for ob in observation_n]
    observation_n, reward_n, done_n, info = env.step(action_n)
    env.render()
@nojvek We haven't yet released the tools necessary to integrate new environments into universe. But after we do, you should be able to create a new universe environment Docker container containing deeplab and implementing the universe environment server API. It would then work as you've described.
If you're interested in getting beta access to our universe integration tools to help dockerize deeplab, you can email me at [email protected]
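In that setup, I'd guess connecting an agent to a manually started deeplab container would look roughly like this; the deeplab package and env id are still hypothetical, but the vnc://host:5900+15900 remote syntax is how universe addresses an already-running environment server:

import gym
import deeplab  # hypothetical package that registers the deeplab env ids

env = gym.make('deeplab.seekAvoidArena-v0')
# Point at a running container: VNC on port 5900, rewarder on port 15900.
env.configure(remotes='vnc://localhost:5900+15900')
observation_n = env.reset()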
@nottombrown I just emailed you.
@frankcarey @tkoeppe @nojvek
I've built Docker images that work with DeepMind Lab as well as Universe.
(I've just merged your Dockerfile with a desktop setup; it may look a bit overkill for now, but it works for me from scratch.)
You can find the relevant Docker images here:
https://hub.docker.com/u/deeplearninc/dashboard/
And a description of how to use them or build your own:
https://github.com/deeplearninc/relaax#deepmind-lab
I've got this working well now. Check out the last commits and comments in PR #24.
@frankcarey Yup, you have to wrap such things in Xvfb (the same is true for gym, universe, and others that use this approach).
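A minimal sketch of that wrapping, assuming the pyvirtualdisplay package (a thin wrapper around Xvfb) is installed and using the same hypothetical deeplab env id as above:

from pyvirtualdisplay import Display
import gym
import deeplab  # hypothetical package that registers the deeplab env ids

# Start a virtual framebuffer so the GL context has a display to render to.
display = Display(visible=0, size=(1024, 768))
display.start()
try:
    env = gym.make('deeplab.seekAvoidArena-v0')
    observation = env.reset()
finally:
    display.stop()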