LIBERO docker example error
Hi, I'm trying to run the LIBERO example with Docker from https://github.com/Physical-Intelligence/openpi/tree/main/examples/libero, but I get the following error when running the docker compose command. How can I fix this issue?
...
openpi_server-1 | INFO:root:Loading model...
openpi_server-1 | INFO:2025-05-13 06:40:57,651:jax._src.xla_bridge:945: Unable to initialize backend 'rocm': module 'jaxlib.xla_extension' has no attribute 'GpuAllocatorConfig'
openpi_server-1 | INFO:jax._src.xla_bridge:Unable to initialize backend 'rocm': module 'jaxlib.xla_extension' has no attribute 'GpuAllocatorConfig'
openpi_server-1 | INFO:2025-05-13 06:40:57,651:jax._src.xla_bridge:945: Unable to initialize backend 'tpu': INTERNAL: Failed to open libtpu.so: libtpu.so: cannot open shared object file: No such file or directory
openpi_server-1 | INFO:jax._src.xla_bridge:Unable to initialize backend 'tpu': INTERNAL: Failed to open libtpu.so: libtpu.so: cannot open shared object file: No such file or directory
openpi_server-1 | INFO:absl:orbax-checkpoint version: 0.11.1
openpi_server-1 | INFO:absl:Created BasePyTreeCheckpointHandler: pytree_metadata_options=PyTreeMetadataOptions(support_rich_types=False), array_metadata_store=None
openpi_server-1 | INFO:absl:Restoring checkpoint from /openpi_assets/openpi-assets/checkpoints/pi0_fast_libero/params.
runtime-1 | INFO:root:Still waiting for server...
runtime-1 | INFO:root:Still waiting for server...
openpi_server-1 | INFO:absl:[thread=MainThread] Failed to get flag value for EXPERIMENTAL_ORBAX_USE_DISTRIBUTED_PROCESS_ID.
openpi_server-1 | INFO:absl:[process=0] /jax/checkpoint/read/bytes_per_sec: 485.6 MiB/s (total bytes: 5.4 GiB) (time elapsed: 11 seconds) (per-host)
openpi_server-1 | INFO:absl:Finished restoring checkpoint from /openpi_assets/openpi-assets/checkpoints/pi0_fast_libero/params.
openpi_server-1 | INFO:absl:[process=0][thread=MainThread] Skipping global process sync, barrier name: Checkpointer:restore
runtime-1 | INFO:root:Still waiting for server...
processor_config.json: 100% 253/253 [00:00<00:00, 2.87MB/s]
processing_action_tokenizer.py: 100% 6.14k/6.14k [00:00<00:00, 29.4MB/s]
openpi_server-1 | A new version of the following files was downloaded from https://huggingface.co/physical-intelligence/fast:
openpi_server-1 | - processing_action_tokenizer.py
openpi_server-1 | . Make sure to double-check they do not contain any added malicious code. To avoid downloading new versions of the code file, you can pin a revision.
tokenizer_config.json: 100% 322/322 [00:00<00:00, 3.38MB/s]
tokenizer.json: 100% 687k/687k [00:00<00:00, 6.30MB/s]
special_tokens_map.json: 100% 3.00/3.00 [00:00<00:00, 24.8kB/s]
runtime-1 | INFO:root:Still waiting for server...
openpi_server-1 | Some kwargs in processor config are unused and will not have any effect: time_horizon, vocab_size, action_dim, scale, min_token.
openpi_server-1 | Some kwargs in processor config are unused and will not have any effect: time_horizon, vocab_size, action_dim, scale, min_token.
openpi_server-1 | INFO:root:Norm stats not found in /app/assets/pi0_fast_libero/physical-intelligence/libero, skipping.
openpi_server-1 | INFO:root:Loaded norm stats from /openpi_assets/openpi-assets/checkpoints/pi0_fast_libero/assets/physical-intelligence/libero
openpi_server-1 | INFO:root:Creating server (host: lgresearch-System-Product-Name, ip: 127.0.1.1)
openpi_server-1 | INFO:websockets.server:server listening on 0.0.0.0:8000
openpi_server-1 | INFO:websockets.server:connection open
openpi_server-1 | INFO:root:Connection from ('127.0.0.1', 51798) opened
0% 0/10 [00:00<?, ?it/s][Warning]: datasets path /app/third_party/libero/libero/datasets does not exist!
runtime-1 | [Warning]: datasets path /app/third_party/libero/libero/datasets does not exist!
runtime-1 | libEGL warning: egl: failed to create dri2 screen
runtime-1 | libEGL warning: egl: failed to create dri2 screen
0% 0/10 [00:01<?, ?it/s]
runtime-1 | Traceback (most recent call last):
runtime-1 | File "examples/libero/main.py", line 219, in <module>
runtime-1 | tyro.cli(eval_libero)
runtime-1 | File "/.venv/lib/python3.8/site-packages/tyro/_cli.py", line 191, in cli
runtime-1 | return run_with_args_from_cli()
runtime-1 | File "examples/libero/main.py", line 85, in eval_libero
runtime-1 | env, task_description = _get_libero_env(task, LIBERO_ENV_RESOLUTION, args.seed)
runtime-1 | File "examples/libero/main.py", line 194, in _get_libero_env
runtime-1 | env = OffScreenRenderEnv(**env_args)
runtime-1 | File "/app/third_party/libero/libero/libero/envs/env_wrapper.py", line 161, in __init__
runtime-1 | super().__init__(**kwargs)
runtime-1 | File "/app/third_party/libero/libero/libero/envs/env_wrapper.py", line 56, in __init__
runtime-1 | self.env = TASK_MAPPING[self.problem_name](
runtime-1 | File "/app/third_party/libero/libero/libero/envs/problems/libero_tabletop_manipulation.py", line 40, in __init__
runtime-1 | super().__init__(bddl_file_name, *args, **kwargs)
runtime-1 | File "/app/third_party/libero/libero/libero/envs/bddl_base_domain.py", line 135, in __init__
runtime-1 | super().__init__(
runtime-1 | File "/.venv/lib/python3.8/site-packages/robosuite/environments/manipulation/manipulation_env.py", line 162, in __init__
runtime-1 | super().__init__(
runtime-1 | File "/.venv/lib/python3.8/site-packages/robosuite/environments/robot_env.py", line 214, in __init__
runtime-1 | super().__init__(
runtime-1 | File "/.venv/lib/python3.8/site-packages/robosuite/environments/base.py", line 143, in __init__
runtime-1 | self._reset_internal()
runtime-1 | File "/app/third_party/libero/libero/libero/envs/bddl_base_domain.py", line 735, in _reset_internal
runtime-1 | super()._reset_internal()
runtime-1 | File "/.venv/lib/python3.8/site-packages/robosuite/environments/robot_env.py", line 510, in _reset_internal
runtime-1 | super()._reset_internal()
runtime-1 | File "/.venv/lib/python3.8/site-packages/robosuite/environments/base.py", line 299, in _reset_internal
runtime-1 | render_context = MjRenderContextOffscreen(self.sim, device_id=self.render_gpu_device_id)
runtime-1 | File "/.venv/lib/python3.8/site-packages/robosuite/utils/binding_utils.py", line 210, in __init__
runtime-1 | super().__init__(sim, offscreen=True, device_id=device_id, max_width=max_width, max_height=max_height)
runtime-1 | File "/.venv/lib/python3.8/site-packages/robosuite/utils/binding_utils.py", line 78, in __init__
runtime-1 | self.gl_ctx = GLContext(max_width=max_width, max_height=max_height, device_id=self.device_id)
runtime-1 | File "/.venv/lib/python3.8/site-packages/robosuite/renderers/context/egl_context.py", line 123, in __init__
runtime-1 | raise ImportError(
runtime-1 | ImportError: Cannot initialize a EGL device display. This likely means that your EGL driver does not support the PLATFORM_DEVICE extension, which is required for creating a headless rendering context.
runtime-1 | Exception ignored in: <function EGLGLContext.__del__ at 0x7a49e7fdc160>
runtime-1 | Traceback (most recent call last):
runtime-1 | File "/.venv/lib/python3.8/site-packages/robosuite/renderers/context/egl_context.py", line 155, in __del__
runtime-1 | self.free()
runtime-1 | File "/.venv/lib/python3.8/site-packages/robosuite/renderers/context/egl_context.py", line 146, in free
runtime-1 | if self._context:
runtime-1 | AttributeError: 'EGLGLContext' object has no attribute '_context'
runtime-1 | Exception ignored in: <function MjRenderContext.__del__ at 0x7a49e7fdc310>
runtime-1 | Traceback (most recent call last):
runtime-1 | File "/.venv/lib/python3.8/site-packages/robosuite/utils/binding_utils.py", line 198, in __del__
runtime-1 | self.con.free()
runtime-1 | AttributeError: 'MjRenderContextOffscreen' object has no attribute 'con'
openpi_server-1 | INFO:root:Connection from ('127.0.0.1', 51798) closed
runtime-1 exited with code 1
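
In case it helps isolate the problem: the traceback comes from creating a headless EGL context inside the runtime container, before any openpi/LIBERO logic runs. The snippet below is a minimal way to exercise that same offscreen-rendering path on its own. It is only a sketch; it assumes the `mujoco` Python bindings (which robosuite 1.4 builds on) are importable in the container, and it is not part of the openpi example.

```python
# Minimal standalone check of headless (offscreen) EGL rendering.
# Sketch only: assumes the `mujoco` Python bindings are importable inside the
# runtime container; this script is NOT part of the openpi example.
import os

os.environ["MUJOCO_GL"] = "egl"  # force the headless EGL backend

import mujoco

# A trivial model is enough to exercise offscreen context creation.
model = mujoco.MjModel.from_xml_string(
    "<mujoco><worldbody><geom size='0.1'/></worldbody></mujoco>"
)
data = mujoco.MjData(model)
mujoco.mj_forward(model, data)

renderer = mujoco.Renderer(model, height=128, width=128)
renderer.update_scene(data)
img = renderer.render()
print("offscreen EGL render OK:", img.shape)
```

If this also fails with an EGL error inside the container, the problem is presumably with the container's GPU/EGL setup rather than with the LIBERO example itself.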