Custom Camera Views
Hello,
Thank you for the excellent code.
I'm trying to use the setup to collect simulated RGB data for a project, but I haven't been able to find a way to set a custom camera angle while rendering. All I can see are some predefined modes like topview. Is there any way I can set up custom camera views for the tasks?
Thanks!
@karttikeya we haven't extensively tested metaworld for image-based learning yet, which is why you're finding that these features are either half-baked or lacking.
Image-based learning is a feature area we plan on improving in future versions of metaworld, but I can't give you any hard commitments on when you'll see better support for this other than "some time in (early) 2020."
It sounds like you have both the motivation and the expertise to add or improve the features you need for your project. If you do make improvements, we'd love it if you take a look at CONTRIBUTING.md
and send us a pull request with your improvements.
Note: the only public API in metaworld right now is the metaworld.benchmarks package. You should consider the APIs for the individual environment classes private, and thus highly unstable.
Okay, thanks for letting me know! I see that there aren't any fully functional camera setup methods implemented yet, however is it possible to share the configurations (or how to do them) for the limited case of camera positions in this GIF?
Of course, if I end up implementing the full functionality, I would be happy to put in a PR as well :)
Here's a tiny hack in the meantime. I modified the viewer_setup() method in mujoco_env.py with the following statements:

...
def viewer_setup(self):
    self.viewer.cam.azimuth = -20
    self.viewer.cam.elevation = -20
...
Gets you this perspective
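If you want to reuse this beyond a one-off edit, the override can be factored into a small helper that writes the standard MuJoCo free-camera fields (azimuth, elevation, distance, lookat) onto any viewer's cam object. A minimal sketch; the helper name is mine, not part of metaworld:

```python
def apply_camera_pose(cam, azimuth=-20.0, elevation=-20.0,
                      distance=None, lookat=None):
    """Write the standard MuJoCo free-camera fields onto a cam object."""
    cam.azimuth = azimuth
    cam.elevation = elevation
    if distance is not None:
        cam.distance = distance
    if lookat is not None:
        # cam.lookat is a length-3 array; assign element-wise
        for i, value in enumerate(lookat):
            cam.lookat[i] = value

# Inside viewer_setup(), the hack above then becomes:
#     apply_camera_pose(self.viewer.cam, azimuth=-20, elevation=-20)
```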
So overriding the viewer_setup API lets you change the perspective when you do env.render('human'). But when you use the env.get_image API, it still uses the default view. Btw, env.render('rgb_array') crashes for me for some reason.
What's the fundamental difference between setting up rendering via accessing the viewers vs. calling self.sim.render?
Edit: I found that setting the device id to -1 in self.viewer = mujoco_py.MjRenderContextOffscreen(self.sim, -1) inside mujoco_env.py fixes the rendering issue in rgb_array mode.
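To make that fix concrete, here is a sketch of a tiny factory for the offscreen context. Passing device_id=-1 asks mujoco_py to create the render context without binding to a GLFW window, which is what sidesteps the rgb_array crash on headless machines. The wrapper function is my own naming, not part of any library:

```python
def make_offscreen_viewer(sim, device_id=-1):
    """Create an offscreen render context for a mujoco_py MjSim.

    device_id=-1 avoids binding to a GLFW window, which sidesteps the
    env.render('rgb_array') crash on machines without a display.
    """
    import mujoco_py  # deferred so the helper can be defined without MuJoCo present
    return mujoco_py.MjRenderContextOffscreen(sim, device_id)
```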
Yes, modifying the viewer_setup() API does the trick. In my case (headless remote server, no GLFW), I use env.sim.render instead of env.render, with the following snippet to directly create and adjust the viewer:
env.viewer = mujoco_py.MjRenderContextOffscreen(env.sim, -1)
env.viewer.cam.azimuth = 205
env.viewer.cam.elevation = -170
env.viewer.cam.distance = 2.3
env.viewer.cam.lookat[0] = 1.1
env.viewer.cam.lookat[1] = 1.1
env.viewer.cam.lookat[2] = -0.1
renders a nice GIF-esque angle.
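Once the viewer is set up this way, frames can be pulled with env.sim.render. One detail worth noting: mujoco_py returns the offscreen image vertically flipped relative to the on-screen view, so it is usually reversed before display. A sketch — the helper name, resolution defaults, and the flip are my additions:

```python
def grab_frame(env, width=640, height=480):
    """Render one RGB frame offscreen via the mujoco_py sim.

    mujoco_py's MjSim.render returns the offscreen image upside-down
    relative to the on-screen view, so we flip it vertically.
    """
    frame = env.sim.render(width, height, device_id=-1)
    return frame[::-1]
```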
On a related note, would there be a way to render this "effector ghost", i.e., the same as usual but without the Sawyer arm?
As Meta-World ownership has been transferred from the RLWorkGroup to Farama Foundation, I am closing this issue. If there are any questions or requests for features please join our Discord
Can you please share the code for how you managed to do env.render("rgb_array")? I am using V2. Thanks!
It's been a while since I ran this, but back then I modified the def _get_viewer(self, mode): method as mentioned above. The code is here. Not sure if this would still work with the recent versions.
For anyone who needs to modify the camera with the V2 version, here is one solution:

import numpy as np
from PIL import Image
from gymnasium.envs.mujoco.mujoco_rendering import MujocoRenderer

DEFAULT_CAMERA_CONFIG = {
    "distance": 2,
    "azimuth": 215,
    "elevation": -20.0,
    "lookat": np.array([0, 0.5, 0.0]),
}

env.mujoco_renderer = MujocoRenderer(env.model, env.data, DEFAULT_CAMERA_CONFIG)
Image.fromarray(env.render())
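If the end goal is the GIF mentioned earlier, frames collected with env.render() can be stitched together with Pillow (already used above). A minimal sketch; the function name and fps default are my own choices:

```python
from PIL import Image

def save_gif(frames, path, fps=20):
    """Write a list of HxWx3 uint8 arrays out as an animated GIF."""
    images = [Image.fromarray(frame) for frame in frames]
    images[0].save(
        path,
        save_all=True,
        append_images=images[1:],
        duration=int(1000 / fps),  # milliseconds per frame
        loop=0,                    # loop forever
    )
```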