Genesis
[Feature]: Visualizing Marking Objects in the Preview Window Without Affecting Depth or Semantic Segmentation Maps Acquired by the Camera
What feature or enhancement are you proposing?
I am proposing a new feature that introduces a special type of object, tentatively called an "InvisibleMarker", within the simulation environment. These objects would have the following characteristics:
Visible in the preview window: They can be seen and manipulated during the scene design and debugging phases.
Invisible to the camera during simulation: They are ignored during the actual simulation runtime, specifically during the rendering process for generating data like depth maps and semantic segmentation maps.
This enhancement aims to provide a way to include visual aids and markers in the simulation environment without contaminating the data captured by virtual sensors.
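To make the intent concrete, here is a minimal sketch of what the user-facing side of this feature could look like. The `visible_to_sensors` keyword is hypothetical and does not exist in Genesis today, so this snippet is aspirational; the surrounding Scene/camera calls follow Genesis's typical usage but are illustrative only and may differ in detail:

```python
import genesis as gs

gs.init()

# show_viewer=True gives the interactive preview window where the
# marker should remain visible for scene design and debugging.
scene = gs.Scene(show_viewer=True)
scene.add_entity(gs.morphs.Plane())

# Hypothetical flag: the sphere acts as a placement aid that the
# viewer draws but sensor renders ignore.
marker = scene.add_entity(
    gs.morphs.Sphere(pos=(0.5, 0.0, 0.1), radius=0.05),
    visible_to_sensors=False,  # proposed "InvisibleMarker" behavior (not a real parameter)
)

cam = scene.add_camera(res=(640, 480), pos=(2.0, 0.0, 1.0), lookat=(0.0, 0.0, 0.2))
scene.build()

# With the proposed feature, the depth and segmentation outputs
# would contain no trace of the marker.
rgb, depth, seg, _ = cam.render(depth=True, segmentation=True)
```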
Motivation
This feature addresses several key issues in simulation-based development:
Facilitating Development and Debugging: When setting up complex simulations, developers often use temporary objects or markers to help with positioning, alignment, and general scene layout. Currently, these objects interfere with sensor data.
Maintaining the Accuracy of Simulation Data: For tasks like robot training and validation, accurate sensor data (e.g., depth maps, semantic segmentation) is crucial. The presence of unwanted objects can introduce noise and inaccuracies, leading to flawed training or testing.
Reducing Unnecessary Repetitive Operations: Currently, users must manually add and remove these temporary objects, or implement workarounds, which is time-consuming and error-prone.
Potential Benefit
The primary benefit of this feature is to improve the workflow and accuracy of simulation-based development. It would provide a cleaner, more intuitive way to:
Visualize and debug simulations: Developers can easily add visual aids without affecting the integrity of sensor data.
Generate more accurate training/testing data: By removing unwanted objects from sensor outputs, the quality of data used for machine learning and validation is improved.
Streamline the development process: The need for manual workarounds is reduced, saving time and effort.
What is the expected outcome of the implementation work?
The expected outcome is a new object type or property within the simulation environment that allows users to designate objects as "invisible" to virtual sensors.
Specifically, this might involve:
- A new property or tag: a mechanism to mark an object as an "InvisibleMarker" (or similar).
- Rendering pipeline modification: the rendering pipeline would be updated to exclude these objects from the data generated by virtual cameras (specifically, depth maps, semantic segmentation, and potentially other sensor outputs). A sketch of this filtering step follows below.
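To illustrate the second point, here is a minimal, framework-agnostic sketch of the filtering step a renderer could apply before producing sensor outputs. None of these names (`SceneObject`, `visible_to_sensors`, `objects_for_render`) come from the Genesis codebase; they only demonstrate the intended semantics: the viewer draws everything, while sensor passes skip flagged objects.

```python
from dataclasses import dataclass, field

@dataclass
class SceneObject:
    name: str
    visible_to_sensors: bool = True  # the proposed "InvisibleMarker" flag

@dataclass
class Scene:
    objects: list[SceneObject] = field(default_factory=list)

def objects_for_render(scene: Scene, target: str) -> list[SceneObject]:
    """Return the objects a given render target should draw.

    The interactive viewer draws everything; sensor passes (depth,
    segmentation, ...) skip objects flagged as invisible markers.
    """
    if target == "viewer":
        return scene.objects
    return [o for o in scene.objects if o.visible_to_sensors]

# Example: a debug marker appears in the viewer but not in sensor passes.
scene = Scene([SceneObject("robot"), SceneObject("debug_axis", visible_to_sensors=False)])
assert [o.name for o in objects_for_render(scene, "viewer")] == ["robot", "debug_axis"]
assert [o.name for o in objects_for_render(scene, "depth")] == ["robot"]
```

In a real implementation this would more likely be a per-geometry visibility mask applied inside the sensor render passes rather than a Python-level filter, but the semantics would be the same.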
Additional information
No response