depthai-ros
[BUG] Detection output on mono camera does not look to scale
Version: 1 commit behind iron
commit hash cf5d2aaee9117298ea1632c98ffd36a5d7d535ac
Issue
- Trying to run a `Depth` pipeline on an OAK-D Pro W camera.
- What I want is depth output with a YOLO detection network running on the left camera.
- The detections look like they are not to scale; is there a parameter I need to set to scale them? (See the sketch after this list for the kind of mapping I would expect.)
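For context, this is a minimal sketch of the rescaling I would have expected somewhere in the pipeline, assuming the detections come out normalized to the square NN input (e.g. 416x416) and the overlay is drawn on the full 1280x720 mono frame. The function name and numbers are just for illustration, not depthai-ros API:

```python
# Hypothetical illustration only: map a detection box that is normalized to the
# NN input back onto the full-resolution mono frame.

def scale_detection(bbox_norm, frame_w=1280, frame_h=720):
    """bbox_norm = (xmin, ymin, xmax, ymax), each in [0, 1] relative to the NN input."""
    xmin, ymin, xmax, ymax = bbox_norm
    # Simple stretch to the frame size; if the NN input was letterboxed or cropped
    # to keep the aspect ratio, the mapping would need to account for that instead.
    return (
        int(xmin * frame_w),
        int(ymin * frame_h),
        int(xmax * frame_w),
        int(ymax * frame_h),
    )

print(scale_detection((0.25, 0.25, 0.75, 0.75)))  # -> (320, 180, 960, 540)
```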
Steps to reproduce
- Set config (camera.yaml) as below
```yaml
/**:
  ros__parameters:
    camera:
      i_enable_imu: true
      i_enable_ir: true
      i_nn_type: none
      i_pipeline_type: Depth
    left:
      i_publish_topic: true
      i_enable_nn: true
      i_disable_node: false
      i_resolution: '720P'
    left_nn:
      i_board_socket_id: 1
      i_nn_config_path: depthai_ros_driver/yolo
```
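If the config is kept in a separate file rather than editing the package default, it can be passed to the driver at launch time (the stock camera.launch.py exposes a params_file argument, if I am not mistaken):

```bash
ros2 launch depthai_ros_driver camera.launch.py params_file:=/path/to/camera.yaml
```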
- Modify the launch file to add the `depthai_filters::Detection2DOverlay` node and the `detection_labels` list; the labels have to be set for the overlay as well, since its defaults are the MobileNet-SSD labels.
```python
detection_labels = [
"person",
"bicycle",
"car",
"motorbike",
"aeroplane",
"bus",
"train",
"truck",
"boat",
"traffic light",
"fire hydrant",
"stop sign",
"parking meter",
"bench",
"bird",
"cat",
"dog",
"horse",
"sheep",
"cow",
"elephant",
"bear",
"zebra",
"giraffe",
"backpack",
"umbrella",
"handbag",
"tie",
"suitcase",
"frisbee",
"skis",
"snowboard",
"sports ball",
"kite",
"baseball bat",
"baseball glove",
"skateboard",
"surfboard",
"tennis racket",
"bottle",
"wine glass",
"cup",
"fork",
"knife",
"spoon",
"bowl",
"banana",
"apple",
"sandwich",
"orange",
"broccoli",
"carrot",
"hot dog",
"pizza",
"donut",
"cake",
"chair",
"sofa",
"pottedplant",
"bed",
"diningtable",
"toilet",
"tvmonitor",
"laptop",
"mouse",
"remote",
"keyboard",
"cell phone",
"microwave",
"oven",
"toaster",
"sink",
"refrigerator",
"book",
"clock",
"vase",
"scissors",
"teddy bear",
"hair drier",
"toothbrush",
]
detection_viz_node = ComposableNode(
    package="depthai_filters",
    plugin="depthai_filters::Detection2DOverlay",
    parameters=[
        {"label_map": detection_labels},
    ],
    remappings=[
        ("rgb/preview/image_raw", "/oak/left/image_raw"),
        ("nn/detections", "/oak/left_nn/detections"),
    ],
)
...
ComposableNodeContainer(
    name=f"{name}_container",
    namespace=namespace,
    package="rclcpp_components",
    executable="component_container",
    composable_node_descriptions=[
        ComposableNode(
            package="depthai_ros_driver",
            plugin="depthai_ros_driver::Camera",
            name=name,
            namespace=namespace,
            parameters=[
                params_file,
                tf_params,
                parameter_overrides,
                {"left_nn.i_label_map": detection_labels},
            ],
        ),
        detection_viz_node,
    ],
    arguments=["--ros-args", "--log-level", log_level],
    prefix=[launch_prefix],
    output="both",
),
```
- Build and launch:

```bash
ros2 launch depthai_ros_driver camera.launch.py
```
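After launching, I sanity-check that the detection and overlay topics are actually publishing (standard ros2 CLI; the exact overlay topic name depends on the filter's defaults and remappings):

```bash
ros2 topic list | grep -E 'detections|overlay'
ros2 topic hz /oak/left_nn/detections
```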
Below is how the overlay looks; the person doesn't look like this on camera, trust me, we don't have ghosts ;)