Multi Femto Mega camera connection issue over the network
First of all, thanks for the nice SDK. I have set up the environment and run OrbbecSDK_ROS2 on an Ubuntu machine, and it is working fine. However, I didn't see a launch file for connecting multiple Femto Mega cameras over the network. There is a launch file for multiple cameras connected through USB, but I couldn't find one for network-connected cameras. Could you please suggest how we can do this? Also, I would like to know about the compressed images from the camera: what compression technique is used, and can we set the compression level at launch time?
Anil Bhujel, Research Associate, Michigan State University
Hi @Anil-Bhujel
Thank you for using the Femto Mega camera. We're glad to hear that OrbbecSDK_ROS2 is running smoothly on your Ubuntu machine.
Regarding your question about connecting multiple Femto Mega cameras over a network, I would first like to know how many cameras you plan to use and the resolution at which you plan to capture data. Our RGB streams use compression techniques such as MJPEG, H.264, and H.265, while depth streams are not compressed and are only available in Y16 format. Please note that H.264 and H.265 compression methods are only supported over the network.
We will also provide an example of setting up multiple cameras over the network soon.
Best regards,
@jian-dong Thank you for your prompt response. Indeed, we are working to build a large computer vision dataset for Precision Livestock Farming (PLF) to attract the computer vision and AI communities to PLF. We run a series of experiments with different numbers of cameras. Currently, we have 6 in our laboratory, but we could use 2 or more at a time, and we plan to record compressed color image and uncompressed depth topics from multiple cameras into a single rosbag2 file. I am happy to hear that you are willing to assist us. Following up on your earlier response, we have the following queries and requirements:
- We have to synchronize the timestamps of all cameras within a network to the system clock or the camera clock.
- How can we select among the MJPEG, H.264, and H.265 compression techniques, and which one do you recommend, along with compression ratios (if any)? Please keep in mind that our experimental livestock facility is in a remote location and there is limited space available to store long, high-frame-rate recordings. So we plan to compress as much as possible while recording and restore the original quality for analysis in the lab.
- Can we restore the compressed images to original quality during playback of the rosbag2 file for analysis, using any third-party decoder?
- Does OrbbecSDK_ROS2 utilize the NVIDIA GPU, similar to isaac_ros_compression?
- To save recording space, we plan to record video at full fps when an animal is present and moving, but at reduced fps (say, 1 fps) when there is no movement. Do you have any mechanism to detect motion or similar events and activate the camera accordingly?
Sorry for the long list of queries; these are quite critical experiments, and we are happy to use Orbbec cameras. FYI, we set an IP for each network-connected camera and verified single-camera operation. When we connected all the cameras to the network and ran `ros2 launch orbbec_camera femto_mega.launch.py enumerate_net_device:=true`, it only launched the single camera with the lowest IP address.
Once again thank you for your time and support.
Hi @Anil-Bhujel Thank you for your detailed feedback. Since some of the technical questions you've raised are quite complex, I will consult with more specialized colleagues to provide you with a comprehensive response. Meanwhile, to better assess your needs, please provide the following information:
- Maximum Number of Cameras: How many cameras do you plan to use simultaneously at most during your experiments?
- Resolution and Data Streams: What resolution and frame rate will you use? Do you need only color images, or also depth and IR data streams?
- Recording Duration: How long do you plan to record during each experiment?
- Desired Bag File Size: Considering your storage limitations, what is the maximum acceptable size for the bag file?
This information will help us conduct a more accurate assessment. Thank you for your cooperation, and we will get back to you as soon as possible.
Hi @jian-dong Please find my responses inline below.
- Maximum Number of Cameras: How many cameras do you plan to use simultaneously at most during your experiments? Mostly 2, but 4 at most.
- Resolution and Data Streams: What resolution and frame rate will you use? Do you need only color images, or also depth and IR data streams? During periods of animal movement, the maximum possible frame rate with high quality is preferred (but, depending on other constraints, at least 15 fps at HD quality); during periods without movement, HD frames at 1 fps are enough. We can compress the images before storing them, and minor loss on reconstruction is acceptable. Yes, we need all RGB, depth, and IR images along with other metadata (timestamp, frame rate, resolution, and other camera configuration).
- Recording Duration: How long do you plan to record during each experiment? At least a week per experiment, but the bag files must be manageable, with one-, five-, or ten-minute durations. Although we are going to record video throughout the animal cycle, unavoidable breaks in the recording are acceptable.
- Desired Bag File Size: Considering your storage limitations, what is the maximum acceptable size for the bag file? We are planning to record one-minute-long bag files for easy processing. However, we can go longer (5 or 10 minutes) since we have a 100 TB NAS device to store the data, but we would prefer no single bag file larger than 10 GB.
I hope this clarifies things. If you have further questions, let me know.
@jian-dong Just regarding the data streams: we can exclude the IR stream if it increases complexity and memory usage.
In our previous test, the data size per second was:
RGB: 1920x1080, 30 fps, H264, data size: 2.59 MB/s
Depth: 640x576, 30 fps, Y16, data size: 21.09 MB/s (640x576x2x30 bytes)
From this, we estimated that the data size for 4 cameras over 10 minutes would be 55.5 GB. If the frame rate is reduced to 15 fps, the data size would be 55.5 / 2 = 27.77 GB.
If a lossless compression algorithm like RVL is used, with a compression ratio of 1/3, the total data size for 10 minutes should be reduced to around 9 GB. @jian-dong Does rosbag2 support RVL compression and decompression during playback?
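For reference, a quick sketch of the arithmetic behind these estimates:

```python
# Back-of-envelope check of the numbers above (all figures are from this
# thread; MB here means 1024*1024 bytes).
rgb_mb_per_s = 2.59                            # 1920x1080 @ 30 fps, H264
depth_mb_per_s = 640 * 576 * 2 * 30 / 1024**2  # Y16 = 2 bytes/px -> ~21.09

per_camera = rgb_mb_per_s + depth_mb_per_s     # ~23.68 MB/s per camera
gb_10min_4cams = per_camera * 4 * 600 / 1024   # 4 cameras, 600 seconds
print(f"4 cameras, 10 min @ 30 fps: {gb_10min_4cams:.1f} GB")        # ~55.5
print(f"4 cameras, 10 min @ 15 fps: {gb_10min_4cams / 2:.2f} GB")    # ~27.77
print(f"+ RVL (~1/3) at 15 fps: ~{gb_10min_4cams / 2 / 3:.1f} GB")   # ~9.3
```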
Hi @zhonghong322 Thank you for the information. It would be a great achievement if we could reduce 10 minutes of video from 4 cameras to ~9 GB. I am still struggling to connect multiple cameras over network IP. I hope your great team will support us on it.
Thanks in advance
I believe RVL is not supported, but you can use the MCAP file format, as shown in the rosbag2 repository, to properly configure your recordings and obtain better results by testing different compression options.
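For example, assuming the rosbag2_storage_mcap plugin is installed, you could record with `ros2 bag record -s mcap --all`, or try rosbag2's built-in zstd compression via `ros2 bag record --compression-mode file --compression-format zstd --all`, and compare the resulting bag sizes against your 10 GB-per-file target.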
Hi @Danilrivero, Thanks for the information. I will check that out. @jian-dong I am still waiting for the network camera launch setup.
@jian-dong, Also, I have a query regarding the camera's power requirements. I checked the camera's power adapter and found it labelled 12V 2.0A, i.e., less than 30W, but when we connected the camera to a Cqenpr 19-port PoE switch (30W port), its response latency was very high (even higher for the depth scene). When we connected it to the 60W port, the camera's response time was much better. We found that the generic PoE switch has a 30W max output per port. Do you have any test results on a generic PoE switch (30W max per port)? Please advise us, as we are connecting all the cameras through PoE due to site-specific constraints. Also, what is the maximum cable length (Cat6e) we can use without compromising camera performance?
Hi @Anil-Bhujel
I have added a sample launch file for multiple network cameras with Femto Mega. You will need to download the Orbbec Viewer first from this link: OrbbecSDK Download. Locate the IP configuration options (as shown in the image below) and set a unique IP address for each camera. Please don't enable DHCP. Connect all cameras to a network switch, and then connect the switch to your computer. Your computer's network configuration should be on the same subnet as your cameras' IP addresses. After configuring, please ping the IP addresses of your cameras to ensure they are reachable.
Once everything is set up, you can modify the launch sample file I provided to start multiple network cameras. Let’s focus on getting the multi-camera setup working first, and we can address further issues step-by-step. If you encounter any problems, please comment below on this issue. cc @xcy2011sky @zhonghong322 @jjiszjj
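The sample essentially includes femto_mega.launch.py once per camera, each with a unique name and the static IP you assigned in Orbbec Viewer. A minimal sketch of that pattern is below; the argument names `net_device_ip` and `net_device_port` are assumptions here, so verify them against the launch arguments in your OrbbecSDK_ROS2 checkout.

```python
# Sketch of a multi-network-camera launch file: one include per camera,
# each with the static IP assigned in Orbbec Viewer. The argument names
# net_device_ip / net_device_port are assumptions -- verify them against
# femto_mega.launch.py in your OrbbecSDK_ROS2 version.
import os
from ament_index_python.packages import get_package_share_directory
from launch import LaunchDescription
from launch.actions import IncludeLaunchDescription
from launch.launch_description_sources import PythonLaunchDescriptionSource

def generate_launch_description():
    launch_file = os.path.join(
        get_package_share_directory('orbbec_camera'),
        'launch', 'femto_mega.launch.py')

    def camera(name, ip):
        return IncludeLaunchDescription(
            PythonLaunchDescriptionSource(launch_file),
            launch_arguments={
                'camera_name': name,        # unique namespace per camera
                'net_device_ip': ip,        # static IP set in Orbbec Viewer
                'net_device_port': '8090',  # Femto Mega's default port
            }.items())

    return LaunchDescription([
        camera('camera_01', '192.168.1.10'),
        camera('camera_02', '192.168.1.11'),
    ])
```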
Hi @jian-dong, @xcy2011sky, @jjiszjj, @zhonghong322 Thank you very much for your sample code. It worked fine. For now, I connected only two cameras. Can I customize it for more than two? Anyway, I will try it myself. I also tested recording and playback of the topics and found them working well. Thank you for your kind support. I have attached screenshots of some test results.
Successfully tested for 4 network cameras.
@Anil-Bhujel Thank you for your feedback! I'm glad to hear that all four cameras are running smoothly. If you need any more help, please don't hesitate to reply directly to this issue. Due to the time difference, our response might be delayed, so we appreciate your understanding.
By the way, if you encounter any issues related to transmission efficiency, you can refer to the DDS tuning configuration documentation. For FastDDS, please refer to FastDDS Tuning Guide. For CycloneDDS, please check CycloneDDS Tuning Guide.
@Anil-Bhujel Regarding the power requirement issue, I will consult with our electronics team to give you a more professional response. We will get back to you today. I appreciate your patience! cc @zhonghong322
@Anil-Bhujel
1. Our hardware engineers tested a single Mega connected to a PSE (power sourcing equipment) device with 802.3at (24W), and it works perfectly. However, we haven't tested multiple Megas connected to a single PSE device. If the 30W setup doesn't work, the issue might be on the PSE device side rather than with the Mega. Although the PSE device supports multiple PoE devices, if each device is operating simultaneously and drawing high power (around 20W), the total power supply of the PSE might not be sufficient. This is why the 60W setup performs better.
2. Cable length (Cat6e): we have tested lengths over 15 meters, and it works normally. The Ethernet cable we are using is linked below. You can look for similar specifications on Amazon and try to choose a good one. https://detail.tmall.com/item.htm?id=643056487037&priceTId=2147820017274045837478980e142e&spm=a21n57.sem.item.4.5ae43903nUafcY&utparam=%7B%22aplus_abtest%22%3A%22774849bb4671fa08f2549c0b887469df%22%7D&xxc=ad_ztc&sku_properties=1627207%3A20582712614
@jian-dong Thank you for the valuable documentation. And I understand the time-zone difference; it's fine. @zhonghong322 Thanks for the information. The cable length between the PoE switch and a camera will be less than 15 m, but from the switch to the computer it could be at least 70 m, so I am not confident we will get good data transmission. Anyway, we will try it and update the results. @Danilrivero The default MCAP settings didn't reduce the file size, and I didn't try the storage preset profiles. They mention zstd_fast is not recommended for long-term storage (which is what we need), and I have doubts about retrieving quality data once it is compressed. Anyway, thanks for your suggestion.
@jian-dong, How can we synchronize the camera frames? Meaning that frames should be captured at the same time, and frames in the recorded bag file must have the same timestamp in their headers. Currently, we see somewhat random timestamps (see the attached screenshots). Can we set this by passing the time_domain launch parameter as "global"? We have to analyze frames taken from two sides of an animal using two cameras, so please suggest the best configuration. Also, what is the meaning of the Free Run, Standalone, Primary, and Secondary modes in the synchronization configuration tab of OrbbecViewer?
Can we leverage the launch parameters documented at https://github.com/orbbec/OrbbecSDK_ROS2?tab=readme-ov-file#launch-parameters?
These are our currently recorded topics. You can see the ROS 2 timestamps at the bottom of the window and the color image from each camera in the right-side windows.
@Anil-Bhujel For multi-device synchronization, please refer to this document: https://www.orbbec.com/docs/set-up-cameras-for-external-synchronization_v1-2/
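In the meantime, whichever sync mode you use, you can pair near-simultaneous frames from the two cameras at analysis time using message_filters. A minimal sketch follows; the topic names assume camera_name values of camera_01/camera_02, so adjust them to yours.

```python
# Pair near-simultaneous color frames from two cameras by header stamp.
# Topic names are assumptions based on camera_name = camera_01/camera_02.
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import Image
from message_filters import Subscriber, ApproximateTimeSynchronizer

class PairedFrames(Node):
    def __init__(self):
        super().__init__('paired_frames')
        sub1 = Subscriber(self, Image, '/camera_01/color/image_raw')
        sub2 = Subscriber(self, Image, '/camera_02/color/image_raw')
        # slop: maximum allowed stamp difference (seconds) for a match
        self.sync = ApproximateTimeSynchronizer([sub1, sub2],
                                                queue_size=10, slop=0.02)
        self.sync.registerCallback(self.on_pair)

    def on_pair(self, img1, img2):
        t1 = img1.header.stamp.sec * 10**9 + img1.header.stamp.nanosec
        t2 = img2.header.stamp.sec * 10**9 + img2.header.stamp.nanosec
        self.get_logger().info(f'paired frames, stamp delta = {abs(t1 - t2)} ns')

def main():
    rclpy.init()
    rclpy.spin(PairedFrames())

if __name__ == '__main__':
    main()
```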
Hi @Anil-Bhujel, Just to let you know, our team will be on National Day holiday from October 1st to October 7th. We will do our best to respond to your questions during this period, but please know that we may only be able to provide partial or delayed responses.
We appreciate your patience and understanding.
Hi @jian-dong It's OK, have a nice holiday. @zhonghong322 We have ordered the Multi-Camera Sync Hub Pro and the Femto Mega sync adapter cables. Let's hope it will solve the issue. I will keep you updated.
Hi @jian-dong The FastDDS tuning works for me. Now the images coming from the ROS topics are nearly real-time. Thanks for the suggestion.
Hi @jian-dong I faced an issue with selecting a different compression technique on the Femto Mega camera. I tried to select it from the OrbbecViewer. The log shows that the format and resolution have been set, but the setting is not saved permanently on the device: when I close the OrbbecViewer, reopen it, and check the format, it is back to the default MJPG. I have updated the firmware from 1.2.7 to 1.2.9, but the problem is still there. How can we select a different compression mode? Here are some screenshots. Also, how can we select it from the launch file? Or, if we select it on the device, do we no longer need to pass it as a launch argument?
You can configure the compression format in the ROS launch file: `DeclareLaunchArgument('color_format', default_value='MJPG')`.
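For context, this is roughly how the argument is declared and passed through to the camera node (a simplified sketch, not the full femto_mega.launch.py; the executable name below is an assumption to verify against the package):

```python
# Simplified sketch of how color_format flows from a launch argument into
# the camera node's parameters. The real femto_mega.launch.py declares
# many more arguments; the executable name here is an assumption.
from launch import LaunchDescription
from launch.actions import DeclareLaunchArgument
from launch.substitutions import LaunchConfiguration
from launch_ros.actions import Node

def generate_launch_description():
    return LaunchDescription([
        DeclareLaunchArgument('color_format', default_value='MJPG'),
        Node(
            package='orbbec_camera',
            executable='orbbec_camera_node',
            parameters=[{'color_format': LaunchConfiguration('color_format')}],
        ),
    ])
```

You can then override it at launch time, e.g. `ros2 launch orbbec_camera femto_mega.launch.py color_format:=H264`.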
Hi @zhonghong322
Thank you, but I think there is an issue with the H264 and H265 color formats. When I pass MJPG as the color_format parameter, or launch without the color_format parameter (default), the node works fine: I can see the video in rqt, save topics to a ros2 bag file, play them back, and also run `ros2 topic echo /camera/color/image_raw/compressed`. When I pass H264 or H265 as the color_format, the node starts, but when I try to visualize in rqt, record topics to a ros2 bag, or echo the topic on the command line, the node starts giving errors like "Failed to convert frame to video frame". How can we record the compressed topic? And can we decode it back to the original quality during playback?
Error on H264 and H265 while trying to record the topics, echo them, or visualize in rqt (perhaps rqt can't decode the H264-compressed topic, but why can't we record the topic either?):
Working launch command
@jian-dong Can ROS support recording and playback of H264 data?
@Anil-Bhujel A tool node for decoding the Femto Mega's H264/H265 streams is provided on the femto_mega_h26x_decode branch. You can run this node to get decoded H264/H265 video; the tool node's source is here: https://github.com/orbbec/OrbbecSDK_ROS2/blob/femto_mega_h26x_decode/orbbec_camera/tools/mega_h26x_decode_node.cpp
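Presumably the workflow is to check out that branch (`git clone -b femto_mega_h26x_decode https://github.com/orbbec/OrbbecSDK_ROS2.git`), rebuild the workspace with `colcon build`, and then run the decode node alongside the camera node; the exact executable name isn't stated here, so check how the tool is registered in the package's CMakeLists.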
Hi @jjiszjj Thank you so much. Finally got it.
Hi @zhonghong322 We received the Multi-Camera Sync Hub Pro and the Multi-Camera Sync Hub Pro Adapter. Do we need any configuration in the SDK too? Currently, I just used the OrbbecViewer, configured synchronization by setting a primary and a secondary device, and tested it on two devices. The result is attached here. Can we eliminate the nanosecond-level differences too?
@jjiszjj After adding mega_h26x_decode_node.cpp to the SDK and using color_format:=H264/H265, the camera node always publishes /camera/color/h26x_encoded_data, even with color_format:=MJPG and without the color_format launch parameter (default), although the node logs Format: OB_FORMAT_MJPG. Could you please check?
I used `ros2 launch orbbec_camera femto_mega.launch.py color_format:=MJPG enumerate_net_device:=true` and `ros2 launch orbbec_camera femto_mega.launch.py enumerate_net_device:=true`.
However, `ros2 launch orbbec_camera femto_mega.launch.py color_format:=H264/H265 enumerate_net_device:=true` seems to work fine.
Also, can we get that encoded topic back to the original image quality by running the decoding node?
Hi @Anil-Bhujel, thank you for your feedback. I have confirmed this issue based on your report and will fix it in the near future.