unitree_rl_gym

Gym cuda error: invalid resource handle

Open 15380831711 opened this issue 9 months ago • 2 comments

*** Warning: failed to preload CUDA lib
*** Warning: failed to preload PhysX libs
Importing module 'gym_38' (/home/jqr/robot/isaacgym/python/isaacgym/_bindings/linux-x86_64/gym_38.so)
Setting GYM_USD_PLUG_INFO_PATH to /home/jqr/robot/isaacgym/python/isaacgym/_bindings/linux-x86_64/usd/plugInfo.json
PyTorch version 2.4.1+cu121
Device count 1
/home/jqr/robot/isaacgym/python/isaacgym/_bindings/src/gymtorch
Using /home/jqr/.cache/torch_extensions/py38_cu121 as PyTorch extensions root...
Emitting ninja build file /home/jqr/.cache/torch_extensions/py38_cu121/gymtorch/build.ninja...
Building extension module gymtorch...
Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N)
ninja: no work to do.
Loading extension module gymtorch...
Setting seed: 1
Not connected to PVD
/buildAgent/work/99bede84aa0a52c2/source/physx/src/gpu/PxPhysXGpuModuleLoader.cpp (148) : internal error : libcuda.so!

[Warning] [carb.gym.plugin] Failed to create a PhysX CUDA Context Manager. Falling back to CPU.
Physics Engine: PhysX
Physics Device: cpu
GPU Pipeline: disabled
WARNING: lavapipe is not a conformant vulkan implementation, testing use only.
/home/jqr/miniconda3/envs/unitree-rl/lib/python3.8/site-packages/torch/functional.py:513: UserWarning: torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument. (Triggered internally at ../aten/src/ATen/native/TensorShape.cpp:3609.)
  return _VF.meshgrid(tensors, **kwargs)  # type: ignore[attr-defined]
[Error] [carb.gym.plugin] Gym cuda error: invalid resource handle: ../../../source/plugins/carb/gym/impl/Gym/GymPhysX.cpp: 6137
[Error] [carb.gym.plugin] Must enable GPU pipeline to use state tensors
Traceback (most recent call last):
  File "legged_gym/scripts/train.py", line 18, in <module>
    train(args)
  File "legged_gym/scripts/train.py", line 12, in train
    env, env_cfg = task_registry.make_env(name=args.task, args=args)
  File "/home/jqr/robot/unitree_rl_gym/legged_gym/utils/task_registry.py", line 68, in make_env
    env = task_class(cfg=env_cfg,
  File "/home/jqr/robot/unitree_rl_gym/legged_gym/envs/base/legged_robot.py", line 45, in __init__
    self._init_buffers()
  File "/home/jqr/robot/unitree_rl_gym/legged_gym/envs/g1/g1_env.py", line 45, in _init_buffers
    super()._init_buffers()
  File "/home/jqr/robot/unitree_rl_gym/legged_gym/envs/base/legged_robot.py", line 440, in _init_buffers
    self.rpy = get_euler_xyz_in_tensor(self.base_quat)
  File "/home/jqr/robot/unitree_rl_gym/legged_gym/utils/isaacgym_utils.py", line 17, in get_euler_xyz
    roll = torch.atan2(sinr_cosp, cosr_cosp)
RuntimeError: CUDA error: an illegal memory access was encountered
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1
Compile with TORCH_USE_CUDA_DSA to enable device-side assertions.

15380831711 — Mar 13 '25, 02:03
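A quick sanity check for the PhysX "libcuda.so" failure in the log above: verify from the same conda environment that PyTorch can initialise CUDA and that the CUDA driver library is loadable at all. This is a minimal sketch, not a confirmed fix; the device index 0 and the library names tried are assumptions.

```python
# Sanity check for the PhysX CUDA Context Manager failure above.
import ctypes
import torch

print("torch:", torch.__version__)
print("cuda available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("device 0:", torch.cuda.get_device_name(0))

# The internal error points at libcuda.so (the driver library). If neither
# name below can be loaded, PhysX can only fall back to CPU and the GPU
# pipeline stays disabled, which then breaks the state-tensor calls.
for name in ("libcuda.so", "libcuda.so.1"):
    try:
        ctypes.CDLL(name)
        print(name, "loaded OK")
        break
    except OSError as e:
        print(name, "failed to load:", e)
```

If `torch.cuda.is_available()` returns False or neither library loads, the problem is most likely in the NVIDIA driver setup (e.g. `nvidia-smi` not seeing the GPU) rather than in unitree_rl_gym itself.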

(unitree-rl2) jqr@ty:~/robot/unitree_rl_gym$ python legged_gym/scripts/train.py --task=g1
Importing module 'gym_38' (/home/jqr/robot/isaacgym/python/isaacgym/_bindings/linux-x86_64/gym_38.so)
Setting GYM_USD_PLUG_INFO_PATH to /home/jqr/robot/isaacgym/python/isaacgym/_bindings/linux-x86_64/usd/plugInfo.json
No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda-12.1'
PyTorch version 2.3.1
Device count 1
/home/jqr/robot/isaacgym/python/isaacgym/_bindings/src/gymtorch
Using /home/jqr/.cache/torch_extensions/py38_cu121 as PyTorch extensions root...
Emitting ninja build file /home/jqr/.cache/torch_extensions/py38_cu121/gymtorch/build.ninja...
Building extension module gymtorch...
Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N)
ninja: no work to do.
Loading extension module gymtorch...
Setting seed: 1
Traceback (most recent call last):
  File "legged_gym/scripts/train.py", line 18, in <module>
    train(args)
  File "legged_gym/scripts/train.py", line 12, in train
    env, env_cfg = task_registry.make_env(name=args.task, args=args)
  File "/home/jqr/robot/unitree_rl_gym/legged_gym/utils/task_registry.py", line 68, in make_env
    env = task_class(cfg=env_cfg,
  File "/home/jqr/robot/unitree_rl_gym/legged_gym/envs/base/legged_robot.py", line 41, in __init__
    super().__init__(self.cfg, sim_params, physics_engine, sim_device, headless)
  File "/home/jqr/robot/unitree_rl_gym/legged_gym/envs/base/base_task.py", line 41, in __init__
    self.obs_buf = torch.zeros(self.num_envs, self.num_obs, device=self.device, dtype=torch.float)
  File "/home/jqr/miniconda3/envs/unitree-rl2/lib/python3.8/site-packages/torch/cuda/__init__.py", line 293, in _lazy_init
    torch._C._cuda_init()
RuntimeError: No CUDA GPUs are available

15380831711 — Mar 13 '25, 09:03
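For the second run, where torch reports "No CUDA GPUs are available", a short check of what the unitree-rl2 environment actually sees can help narrow it down. This is a sketch under the assumption it is run from that same environment; CUDA_HOME is taken from the log above.

```python
# Why does torch.cuda report no devices in this environment?
import os
import torch

print("CUDA_HOME:", os.environ.get("CUDA_HOME"))
print("CUDA_VISIBLE_DEVICES:", os.environ.get("CUDA_VISIBLE_DEVICES"))
print("torch:", torch.__version__, "| built for CUDA:", torch.version.cuda)
print("device count:", torch.cuda.device_count())  # 0 here matches the RuntimeError above

try:
    torch.zeros(1, device="cuda:0")  # forces lazy CUDA init, same path as base_task.py
    print("CUDA tensor allocation OK")
except RuntimeError as e:
    print("CUDA init failed:", e)
```

A device count of 0 with a driver-visible GPU often points to an environment issue (e.g. CUDA_VISIBLE_DEVICES set to an empty or wrong value, or a CPU-only torch build), but which of these applies here cannot be told from the log alone.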

Hello, could you tell me whether you have solved this issue? If so, could you share how you solved it?

t1213596901 — May 13 '25, 16:05