Adds dynamic and multi-mesh support for ray caster
Description
This PR extends the ray caster to support multiple meshes in mesh_prim_paths, enabling it to interact with complex environments. Additionally, it now supports dynamic meshes, allowing real-time updates to mesh positions during simulation.
This update is fully backward-compatible with existing single-mesh configurations.
A test file is included that demonstrates the updated ray caster processing multiple and dynamic meshes.
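For illustration, a minimal sketch of the kind of sensor configuration this enables; the paths, pattern, and mesh names below are placeholders and not taken from the PR:

```python
from isaaclab.sensors import RayCasterCfg, patterns

# Illustrative only: several entries in mesh_prim_paths, mixing a static
# terrain mesh with a dynamic mesh whose pose is updated every step.
ray_caster_cfg = RayCasterCfg(
    prim_path="{ENV_REGEX_NS}/Robot/base",
    mesh_prim_paths=[
        "/World/ground",    # static mesh (the previously supported single-mesh case)
        "/World/Obstacle",  # dynamic mesh, tracked in real time during simulation
    ],
    pattern_cfg=patterns.GridPatternCfg(resolution=0.1, size=(1.6, 1.0)),
    attach_yaw_only=True,
)
```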
Fixes #1804 #1395 #1259 #553
Type of change
- New feature (non-breaking change which adds functionality)
- This change requires a documentation update
Checklist
- [x] I have run the `pre-commit` checks with `./isaaclab.sh --format`
- [x] I have made corresponding changes to the documentation
- [x] My changes generate no new warnings
- [x] I have added tests that prove my fix is effective or that my feature works
- [ ] I have updated the changelog and the corresponding version in the extension's `config/extension.toml` file
- [x] I have added my name to `CONTRIBUTORS.md` or my name already exists there
Thanks for the MR. This is an interesting addition. However, using a for loop over all the meshes will be rather slow, especially if we go for multiple meshes per environment -- two meshes per environment with 4096 environments will mean 8192 separate Warp kernel calls. This is rather expensive, as launching a kernel takes time.
Is it possible to make a new Warp kernel that parallelizes this as well?
It gives an error if we're adding the ray caster to observations and resetting an env. You have some mismatch in array shapes; I believe this, on lines 292-297, fixes it:

```python
mask = distance < final_distances[env_ids]
final_distances[env_ids][mask] = distance[mask]
final_hits[env_ids][mask] = hit[mask]
# update the ray hit data for the specified environments
self._data.ray_hits_w[env_ids] = final_hits[env_ids]
```
Thanks for catching that! I have applied the fix 😃
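One caveat with the masked write in that snippet, assuming `final_distances` is a `torch.Tensor` and `env_ids` is an index tensor rather than a slice (neither is stated in the thread): chained advanced indexing returns a copy, so the in-place write can be silently lost. A minimal sketch of the pitfall and a combined-index alternative:

```python
import torch

# Chained advanced indexing writes into a temporary copy, not the source tensor.
a = torch.zeros(4)
env_ids = torch.tensor([0, 1])
mask = torch.tensor([True, False])

a[env_ids][mask] = 1.0   # modifies a copy; `a` is unchanged
assert a[0].item() == 0.0

a[env_ids[mask]] = 1.0   # combining the indices first writes in place
assert a[0].item() == 1.0
```

If `env_ids` is a basic slice instead, indexing returns a view and the original snippet works as intended.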
> Thanks for the MR. This is an interesting addition. However, using a for loop over all the meshes will be rather slow, especially if we go for multiple meshes per environment -- two meshes per environment with 4096 environments will mean 8192 separate Warp kernel calls. This is rather expensive, as launching a kernel takes time.
> Is it possible to make a new Warp kernel that parallelizes this as well?
Thanks for reviewing the MR and for the feedback! I initially structured it with a loop over the meshes to keep the logic more straightforward. However, I agree that parallelizing the kernel calls for all meshes would be much more efficient, especially at scale with many environments. I’ll look into adding a new Warp kernel this week to handle multiple meshes.
I have now implemented a new Warp kernel (multi_mesh_raycast_kernel) to efficiently handle multiple meshes in parallel. The previous for-loop has been replaced with a single batched kernel that processes all rays against multiple meshes simultaneously. I have also added caching for mesh-to-environment mappings to reduce CPU overhead between frames. This should significantly improve the raycasting performance, especially for larger numbers of environments and meshes. Let me know if any further adjustments or improvements are needed.
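For readers following along, a minimal sketch of the batching idea; this is not the PR's actual `multi_mesh_raycast_kernel`, and the names, shapes, and per-ray reduction are assumptions. Instead of one launch per mesh, a single 2D launch covers every (mesh, ray) pair, and the closest hit per ray is taken afterwards:

```python
import warp as wp

@wp.kernel
def batched_raycast_sketch(
    mesh_ids: wp.array(dtype=wp.uint64),       # one wp.Mesh id per mesh
    ray_starts: wp.array(dtype=wp.vec3),
    ray_dirs: wp.array(dtype=wp.vec3),
    max_dist: float,
    hit_dists: wp.array2d(dtype=wp.float32),   # (num_meshes, num_rays)
):
    mesh_idx, ray_idx = wp.tid()  # 2D launch: one thread per (mesh, ray) pair
    # output parameters for the ray query
    t = float(0.0)
    u = float(0.0)
    v = float(0.0)
    sign = float(0.0)
    n = wp.vec3()
    f = int(0)
    d = max_dist
    if wp.mesh_query_ray(
        mesh_ids[mesh_idx], ray_starts[ray_idx], ray_dirs[ray_idx],
        max_dist, t, u, v, sign, n, f,
    ):
        d = t
    hit_dists[mesh_idx, ray_idx] = d

# A single launch replaces the per-mesh loop:
# wp.launch(batched_raycast_sketch, dim=(num_meshes, num_rays),
#           inputs=[mesh_ids, ray_starts, ray_dirs, max_dist, hit_dists])
# The closest hit per ray is then a min-reduction over the mesh axis,
# e.g. hit_dists.min(dim=0) after wrapping the array as a torch tensor.
```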
Thanks for the PR.
It might also be worth adding regex processing of mesh prims in the interactive scene, after this: https://github.com/isaac-sim/IsaacLab/blob/d7da02da62b46153da3dc3e54585eea078e0d9cb/source/isaaclab/isaaclab/scene/interactive_scene.py#L590
Added:

```python
if hasattr(asset_cfg, "mesh_prim_paths"):
    asset_cfg.mesh_prim_paths = [path.format(ENV_REGEX_NS=self.env_regex_ns) for path in asset_cfg.mesh_prim_paths]
```
Now this can be used as:

```python
camera_cfg: RayCasterCameraCfg = RayCasterCameraCfg(
    ...,
    mesh_prim_paths=["{ENV_REGEX_NS}/object"],
)
```
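To spell out what that substitution does (assuming the scene's default `env_regex_ns`; this example is not part of the patch):

```python
# With the default env_regex_ns of "/World/envs/env_.*", the placeholder
# expands into a regex that matches the object in every environment.
path = "{ENV_REGEX_NS}/object".format(ENV_REGEX_NS="/World/envs/env_.*")
assert path == "/World/envs/env_.*/object"
```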
This here, at line 272 in ray_caster.py, takes a really long time to compute; adding 2 lidars increased my simulation time by 60 times. Would it be possible to parallelize this too?
When I import models as OBJ, there is a mismatch between the displayed model and the mesh.
For objects imported from a USD file, which display correctly aligned with the mesh, if transformations are applied during initialization and multiple environments are included, it is unclear why applying the mesh's rotation and scaling once, together with the prim's initial rotation once, is sufficient for proper raycasting. Moreover, not all models imported via USD files are compatible.
Under `if child is not None:`, add:

```python
# Save prim
pre_prim = prim
```

Below `base_points = np.asarray(mesh_prim.GetPointsAttr().Get())`, add:

```python
# 1. First, apply the mesh's own world transform (including scaling and rotation)
mesh_transform = np.array(omni.usd.get_world_transform_matrix(mesh_prim)).T
base_points = np.matmul(base_points, mesh_transform[:3, :3].T)
# 2. Next, apply the parent prim's (pre_prim) transform, but only keep the rotation part,
#    removing the scaling factor from the parent transform to avoid double-scaling.
parent_transform = np.array(omni.usd.get_world_transform_matrix(pre_prim)).T
# Calculate the scaling factor of the parent transform (average of the per-axis scales)
scale_x = np.linalg.norm(parent_transform[:3, 0])
scale_y = np.linalg.norm(parent_transform[:3, 1])
scale_z = np.linalg.norm(parent_transform[:3, 2])
scale_factor = (scale_x + scale_y + scale_z) / 3.0
# Remove scaling to get a pure rotation matrix
pure_rot = parent_transform[:3, :3] / scale_factor
# Apply the parent's pure rotation
base_points = np.matmul(base_points, pure_rot.T)
```
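A caveat on the scale-removal step above (my reading, not stated by the patch's author): dividing by the mean per-axis norm only yields a pure rotation when the parent's scaling is uniform. A quick NumPy sanity check of that assumption:

```python
import numpy as np

# For a uniformly scaled rotation, dividing by the mean column norm recovers an
# orthonormal matrix; with non-uniform scale the result is not a pure rotation.
theta = np.pi / 6
rot = np.array([
    [np.cos(theta), -np.sin(theta), 0.0],
    [np.sin(theta),  np.cos(theta), 0.0],
    [0.0, 0.0, 1.0],
])
scaled = 2.5 * rot
scale_factor = np.mean([np.linalg.norm(scaled[:3, i]) for i in range(3)])
pure_rot = scaled / scale_factor
assert np.allclose(pure_rot @ pure_rot.T, np.eye(3))
```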
When I add "/World/envs/env_0/Robot/RR_calf/collisions/RR_calf" to the mesh_prim_paths list, the data from the LidarPattern sensor seems to show errors in scripts\demos\sensors\raycaster_sensor.py.
But when I built upon your demo using LidarPattern with multiple meshes and dynamic meshes, there were no issues.
The robot dog seems to be treated as if it were a cube with half of each face missing.
I found that the cube inside the robot dog is not being scaled correctly; it stays unchanged, while rotation and translation work fine.
Also, using a mesh from one of the robot's own parts gives the warning `[isaaclab.sensors.ray_caster.ray_caster] No prims found for pattern: /World/Robot/base/collisions/base`; only meshes I create myself can be scanned.
Have you tested the RayCasterCamera? I encountered data errors after replacing the core scripts:

```
[Error] [omni.physx.plugin] PhysX error: Failed to unload CUDA module data, returned 710., FILE /builds/omniverse/physics/physx/source/cudamanager/src/CudaContextManager.cpp, LINE 804
```
I was wondering if there’s any update on when it might be merged into the main branch. I’ve run into the same issue when testing the RayCasterCamera after updating the core scripts:
```
[Error] [omni.physx.plugin] PhysX error: Failed to unload CUDA module data, returned 710., FILE /builds/omniverse/physics/physx/source/cudamanager/src/CudaContextManager.cpp, LINE 804
```
Is there anything I can do on my end to help move this forward, or any interim workaround you’d recommend?
Thanks for your time and efforts! 🙏
Thanks for the feedback everyone! I'll have more time starting next week to update this merge request with the current main branch code.
I was wondering if this feature is still being worked on? I recently implemented the same feature (only modifying raycaster.py, without any loop during raycasting), and maybe I can contribute something to it.
Thanks for the feature proposal. There is a new PR covering the multi-mesh integration with a more tuned implementation, so closing this now in favor of #3298.