Azure_Kinect_ROS_Driver
Compressed Image Transport - PNG compression requires 8/16-bit encoded color format (input format is: 32FC1)
Describe the bug
I run Azure_Kinect_ROS_Driver locally on my laptop and it works well.
However, when I run the ROS master node on a remote machine, connect my laptop to the remote machine with a network cable, keep the Kinect connected to my laptop, and launch the Kinect, errors appear.
When I set depth_unit to 32FC1, I get:
[ERROR] [1692888908.917343914]: Compressed Image Transport - PNG compression requires 8/16-bit encoded color format (input format is: 32FC1)
[ERROR] [1692888908.917822999]: cv_bridge exception: '[32FC1] is not a color format. but [bgr8] is. The conversion does not make sense'
When I set depth_unit to 16UC1, I get:
[ERROR] [1692889861.851759845]: cv_bridge exception: '[16UC1] is not a color format. but [bgr8] is. The conversion does not make sense'
[ERROR] [1692889870.630862864]: Compressed Depth Image Transport - Compression requires single-channel 32bit-floating point or 16bit raw depth images (input format is: bgra8).
After these errors appear, the publish rate of the image-related topics becomes much slower.
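Both sets of errors look to me like image_transport plugins being applied to topics whose encodings they cannot handle: the plain compressed (PNG/JPEG) plugin wants an 8/16-bit or color encoding and so fails on the 32FC1/16UC1 depth stream, while compressedDepth wants 16UC1/32FC1 depth and so fails on the bgra8 color stream. As far as I can tell, image_transport reads the blacklist from a parameter named <image base topic>/disable_pub_plugins, so the group-level disable_pub_plugins in my launch file (see Additional context) may simply not be picked up. A per-topic sketch of what I mean is below; the topic names depth/image_raw and rgb/image_raw are my assumption of the driver's default naming and would need to match the actual topics:

<group ns="cam0">
  <!-- Sketch only: keep the color-oriented 'compressed' plugin off the depth topic... -->
  <rosparam param="depth/image_raw/disable_pub_plugins">
    - 'image_transport/compressed'
    - 'image_transport/theora'
  </rosparam>
  <!-- ...and keep the depth-oriented 'compressedDepth' plugin off the color topic. -->
  <rosparam param="rgb/image_raw/disable_pub_plugins">
    - 'image_transport/compressedDepth'
    - 'image_transport/theora'
  </rosparam>
</group>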
To Reproduce
Steps to reproduce the behavior:
- run roscore on the remote computer, set the correct ROS_MASTER_URI and ROS_IP on the local laptop, and connect the local laptop and remote computer with a network cable
- launch my camera driver launch file (attached in the Additional context section)
- see the errors above
- try to view an image-related topic in RViz and observe the lag (see the sketch after this list)
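I suspect the lag itself is mostly RViz pulling the raw, uncompressed 3072P color stream across the cable once the compressed topics stop working. A workaround I am considering is running image_transport's republish node on the remote machine so that only the compressed stream crosses the network; a minimal sketch follows (the output topic /cam0/rgb/image_local and the node name rgb_republish are placeholders of mine):

<launch>
  <!-- Runs on the remote (RViz) machine: subscribe to the compressed color stream
       coming over the network and republish it as a raw image locally, so RViz can
       display /cam0/rgb/image_local without pulling full raw frames across the cable. -->
  <node pkg="image_transport" type="republish" name="rgb_republish"
        args="compressed in:=/cam0/rgb/image_raw raw out:=/cam0/rgb/image_local" />
</launch>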
Expected behavior
azure_kinect_ros_driver publishes images normally.
Logs
As above.
Desktop (please complete the following information):
- OS: Ubuntu
- Version: 20.04
Additional context
My launch file:
<launch>
<arg name="tf_prefix_0" default="cam0_"/>
<arg name="overwrite_robot_description" default="false" /> <!-- Flag to publish a standalone azure_description instead of the default robot_descrition parameter-->
<arg name="depth_enabled" default="true" /> <!-- Enable or disable the depth camera -->
<arg name="depth_mode" default="WFOV_UNBINNED" /> <!-- Set the depth camera mode, which affects FOV, depth range, and camera resolution. See Azure Kinect documentation for full details. Valid options: NFOV_UNBINNED, NFOV_2X2BINNED, WFOV_UNBINNED, WFOV_2X2BINNED, and PASSIVE_IR -->
<arg name="depth_unit" default="16UC1" /> <!-- Depth distance units. Options are: "32FC1" (32 bit float metre) or "16UC1" (16 bit integer millimetre) -->
<arg name="color_enabled" default="true" /> <!-- Enable or disable the color camera -->
<arg name="color_format" default="bgra" /> <!-- The format of RGB camera. Valid options: bgra, jpeg -->
<arg name="color_resolution" default="3072P"/> <!-- Resolution at which to run the color camera. Valid options: 720P, 1080P, 1440P, 1536P, 2160P, 3072P -->
<arg name="fps" default="5" /> <!-- FPS to run both cameras at. Valid options are 5, 15, and 30 -->
<arg name="point_cloud" default="true" /> <!-- Generate a point cloud from depth data. Requires depth_enabled -->
<arg name="rgb_point_cloud" default="true" /> <!-- Colorize the point cloud using the RBG camera. Requires color_enabled and depth_enabled -->
<arg name="point_cloud_in_depth_frame" default="true" /> <!-- Whether the RGB pointcloud is rendered in the depth frame (true) or RGB frame (false). Will either match the resolution of the depth camera (true) or the RGB camera (false). -->
<arg name="required" default="false" /> <!-- Argument which specified if the entire launch file should terminate if the node dies -->
<arg name="sensor_sn" default="" /> <!-- Sensor serial number. If none provided, the first sensor will be selected -->
<arg name="recording_file" default="" /> <!-- Absolute path to a mkv recording file which will be used with the playback api instead of opening a device -->
<arg name="recording_loop_enabled" default="false" /> <!-- If set to true the recording file will rewind the beginning once end of file is reached -->
<arg name="body_tracking_enabled" default="false" /> <!-- If set to true the joint positions will be published as marker arrays -->
<arg name="body_tracking_smoothing_factor" default="0.0" /> <!-- Set between 0 for no smoothing and 1 for full smoothing -->
<arg name="rescale_ir_to_mono8" default="false" /> <!-- Whether to rescale the IR image to an 8-bit monochrome image for visualization and further processing. A scaling factor (ir_mono8_scaling_factor) is applied. -->
<arg name="ir_mono8_scaling_factor" default="1.0" /> <!-- Scaling factor to apply when converting IR to mono8 (see rescale_ir_to_mono8). If using illumination, use the value 0.5-1. If using passive IR, use 10. -->
<arg name="imu_rate_target" default="0"/> <!-- Desired output rate of IMU messages. Set to 0 (default) for full rate (1.6 kHz). -->
<arg name="node_start_delay" default="0" /> <!-- delay for second camera in seconds -->
<!-- master camera -->
<group ns="cam0">
<rosparam param="disable_pub_plugins">
- 'image_transport/compressedDepth'
- 'image_transport/theora'
</rosparam>
<param name="azure_description"
command="xacro $(find azure_kinect_ros_driver)/urdf/azure_kinect.urdf.xacro tf_prefix:=$(arg tf_prefix_0)" />
<node name="joint_state_publisher_azure" pkg="joint_state_publisher" type="joint_state_publisher">
<remap from="robot_description" to="azure_description" />
</node>
<node name="robot_state_publisher_azure" pkg="robot_state_publisher" type="robot_state_publisher">
<remap from="robot_description" to="azure_description" />
</node>
<node pkg="azure_kinect_ros_driver" type="node" name="azure_kinect_ros_driver0" output="screen" required="$(arg required)" respawn="true" >
<param name="sensor_sn" type="string" value="000200514512" />
<param name="depth_enabled" type="bool" value="$(arg depth_enabled)" />
<param name="depth_mode" type="string" value="$(arg depth_mode)" />
<param name="depth_unit" type="string" value="$(arg depth_unit)" />
<param name="color_enabled" type="bool" value="$(arg color_enabled)" />
<param name="color_format" type="string" value="$(arg color_format)" />
<param name="color_resolution" type="string" value="$(arg color_resolution)" />
<param name="fps" type="int" value="$(arg fps)" />
<param name="point_cloud" type="bool" value="$(arg point_cloud)" />
<param name="rgb_point_cloud" type="bool" value="$(arg rgb_point_cloud)" />
<param name="point_cloud_in_depth_frame" type="bool" value="$(arg point_cloud_in_depth_frame)" />
<param name="tf_prefix" type="string" value="$(arg tf_prefix_0)" />
<param name="recording_file" type="string" value="$(arg recording_file)" />
<param name="recording_loop_enabled" type="bool" value="$(arg recording_loop_enabled)" />
<param name="body_tracking_enabled" type="bool" value="$(arg body_tracking_enabled)" />
<param name="body_tracking_smoothing_factor" type="double" value="$(arg body_tracking_smoothing_factor)" />
<param name="rescale_ir_to_mono8" type="bool" value="$(arg rescale_ir_to_mono8)" />
<param name="ir_mono8_scaling_factor" type="double" value="$(arg ir_mono8_scaling_factor)" />
<param name="imu_rate_target" type="int" value="$(arg imu_rate_target)"/>
<param name="wired_sync_mode" value="0"/>
<param name="subordinate_delay_off_master_usec" value="0"/>
</node>
</group>
</launch>