Customizability of the environment
I have a few questions about what it is possible to customize in the environments. If it is possible, could someone point me to the file where I would make that customization? That would be very helpful.
- Where exactly would you go to edit the themes for the maze? Can you create custom themes by uploading images to be used as the sky / walls / floor?
- Can you change the walls / floor of specific areas of a maze? It seems like you would use the variation layer for this - can you make it deterministic?
- I'm assuming not, but just to confirm: are there sounds in the environment? Can the agent only receive visual input? Is there any other established way to simulate other senses?
- I have seen #68, but I don't quite understand how to use DEBUG.CAMERA.TOP_DOWN. With this, would I be able to visualize an episode of training from the top down, just not use it as an observation input?
- Can you extract the locations (x, y, direction) of agents over time?
- Can you trigger events in the environment when certain things happen, such as when the agent goes to a certain location?
See inline
- *Where exactly would you go to edit the themes for the maze? Can you create custom themes by uploading images to be used as the sky / walls / floor?* You can edit the themes here: https://github.com/deepmind/lab/tree/master/game_scripts/themes. You can add or edit textures here: https://github.com/deepmind/lab/tree/master/assets/textures/map/lab_games
- *Can you change the walls / floor of specific areas of a maze? It seems like you would use the variation layer for this - can you make it deterministic?* You can set the variation layer deterministically using a string with the letters A-Z, and you can adjust the randomization by editing the themes.
- *I'm assuming not, but just to confirm: are there sounds in the environment? Can the agent only receive visual input? Is there any other established way to simulate other senses?* We currently don't use audio as a signal, but you can listen for events and forward them to the agent (see question 6).
- *I have seen #68 (https://github.com/deepmind/lab/issues/68), but I don't quite understand how to use DEBUG.CAMERA.TOP_DOWN. With this, would I be able to visualize an episode of training from the top down, just not use it as an observation input?* This camera is only available as an agent observation, but you could use a library like PyGame to display it; see the first sketch after this list.
- *Can you extract the locations (x, y, direction) of agents over time?* Yes, there are two observations: 'DEBUG.POS.TRANS' (https://github.com/deepmind/lab/blob/master/game_scripts/decorators/debug_observations.lua#L338) and 'DEBUG.PLAYERS.EYE.POS' (https://github.com/deepmind/lab/blob/master/game_scripts/decorators/debug_observations.lua#L347). Please look in this file for other observations. The second sketch after this list records a trajectory this way.
- *Can you trigger events in the environment when certain things happen, such as when the agent goes to a certain location?* Yes, see https://github.com/deepmind/lab/blob/master/game_scripts/levels/tests/event_test.lua for how to trigger events, and also https://github.com/deepmind/lab/blob/master/game_scripts/common/position_trigger.lua. The last sketch after this list shows how such events surface on the Python side.
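For question 4, here is a minimal Python sketch of viewing 'DEBUG.CAMERA.TOP_DOWN' with PyGame while stepping with no-op actions. The level name 'tests/empty_room_test' and the image-layout handling are assumptions; check `env.observation_spec()` and make sure the level you load applies the debug observations decorator.

```python
import numpy as np
import pygame
import deepmind_lab

# Any level decorated with the debug observations should work; the level name
# used here is an assumption.
env = deepmind_lab.Lab(
    'tests/empty_room_test',
    ['RGB_INTERLEAVED', 'DEBUG.CAMERA.TOP_DOWN'],
    config={'width': '96', 'height': '72'})
env.reset()

pygame.init()
screen = None
noop = np.zeros(len(env.action_spec()), dtype=np.intc)  # no-op action

for _ in range(1000):
  if not env.is_running():
    env.reset()
  top_down = env.observations()['DEBUG.CAMERA.TOP_DOWN']
  if top_down.shape[0] == 3:                  # planar (3, H, W) -> (H, W, 3)
    top_down = np.transpose(top_down, (1, 2, 0))
  frame = np.transpose(top_down, (1, 0, 2))   # PyGame surfarray wants (W, H, 3)
  if screen is None:
    screen = pygame.display.set_mode(frame.shape[:2])
  pygame.surfarray.blit_array(screen, frame)
  pygame.display.flip()
  env.step(noop, num_steps=1)
```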
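For question 5, a similar sketch that logs an (x, y, yaw) trajectory from 'DEBUG.POS.TRANS' and 'DEBUG.POS.ROT'. The component ordering of 'DEBUG.POS.ROT' is an assumption here; confirm it against debug_observations.lua.

```python
import numpy as np
import deepmind_lab

env = deepmind_lab.Lab(
    'tests/empty_room_test',                  # assumed level name
    ['DEBUG.POS.TRANS', 'DEBUG.POS.ROT'],
    config={'width': '96', 'height': '72'})
env.reset()

trajectory = []                               # list of (x, y, yaw) tuples
noop = np.zeros(len(env.action_spec()), dtype=np.intc)

while env.is_running() and len(trajectory) < 500:
  obs = env.observations()
  x, y, _ = obs['DEBUG.POS.TRANS']            # world-space position
  yaw = obs['DEBUG.POS.ROT'][1]               # assumed (pitch, yaw, roll) order
  trajectory.append((float(x), float(y), float(yaw)))
  env.step(noop, num_steps=1)

print(trajectory[:5])
```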
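For question 6, events emitted from a level's Lua script (with dmlab.system.events as in the linked event_test.lua, or from a trigger built with common/position_trigger.lua) can be polled from Python with `env.events()`. A minimal sketch, assuming the linked test level loads as 'tests/event_test'; the event name 'GOAL_REACHED' is purely illustrative.

```python
import numpy as np
import deepmind_lab

# 'tests/event_test' is assumed to correspond to the linked event_test.lua.
env = deepmind_lab.Lab(
    'tests/event_test',
    ['RGB_INTERLEAVED'],
    config={'width': '96', 'height': '72'})
env.reset()

noop = np.zeros(len(env.action_spec()), dtype=np.intc)
for _ in range(100):
  if not env.is_running():
    break
  # events() returns the (name, payload) pairs emitted by the Lua script since
  # the previous step; 'GOAL_REACHED' is a hypothetical name your level would add.
  for name, payload in env.events():
    if name == 'GOAL_REACHED':
      print('Agent reached the target location:', payload)
  env.step(noop, num_steps=1)
```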
Thank you for the help. Still testing some of these out.
As a follow-up, is there a way to make part of a wall transparent, or otherwise change the size of the walls? When I upload partially transparent .tga images I seem to get something like this (where the top half is supposed to be transparent):

![image](https://user-images.githubusercontent.com/9020650/50778419-e425e480-126b-11e9-8569-e5ba36f24cce.png)
You will need to mark the textures in the shader as transparent. Note that this will slow down the environment. For textures that should be completely transparent we use a texture called 'textures/map/poltergeist'; the source of its shader is in 'assets/scripts/poltergeist.shader'. Try adding your own shaders for your custom textures. The wall and floor shaders are located in 'assets/scripts/dm_lab.shader'.
When I do what you mention, using the following as my shader:
```
textures/map/lab_games/custom
{
    qer_editorimage textures/map/lab_games/half-wall-1.tga
    surfaceparm trans
    surfaceparm playerclip
    surfaceparm nolightmap
    {
        map textures/map/lab_games/half-wall-1.tga
        blendfunc blend
    }
}
```
the walls start bugging out, showing the wall outlines as well as the walls behind them, like this:
I'm not sure where to look to fix this.