
Multiple camera sources?

Open tschmidt64 opened this issue 7 years ago • 17 comments

I am looking to run darknet_ros using multiple camera topic sources as inputs to yolo, with the outputs being distinguishable based on the specific camera source. For example, I would hope to have the message (BoundingBox.msg) format include something like "string CameraSource" so that I can tell which camera the detection is coming from.

One way I've thought of is to have two camera inputs in the subscriber section of ros.yaml, but I'm not really sure how I would go about modifying the message publication to include the camera source string.

Another idea I've had is to just create two launch files under different namespaces, so that there are two darknet instances running simultaneously, publishing separately to /darknet_ros_1/ and /darknet_ros_2/. This seems feasible but undesirably bulky, considering I have to run two darknet instances at the same time.
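To make the outputs distinguishable in that second setup, I imagine gluing the two namespaced outputs back together with a small merger node. This is only a rough sketch of what I mean: the topic names are placeholders and it assumes darknet_ros_msgs is importable.

#!/usr/bin/env python
# Sketch only: merge bounding boxes from two namespaced darknet_ros
# instances onto one topic and record the source camera in header.frame_id.
import rospy
from darknet_ros_msgs.msg import BoundingBoxes

def make_callback(camera_name, pub):
    def callback(msg):
        # Tag the message so downstream nodes can tell the cameras apart.
        msg.header.frame_id = camera_name
        pub.publish(msg)
    return callback

if __name__ == '__main__':
    rospy.init_node('bounding_box_merger')
    pub = rospy.Publisher('/merged/bounding_boxes', BoundingBoxes, queue_size=10)
    rospy.Subscriber('/darknet_ros_1/bounding_boxes', BoundingBoxes,
                     make_callback('camera_1', pub))
    rospy.Subscriber('/darknet_ros_2/bounding_boxes', BoundingBoxes,
                     make_callback('camera_2', pub))
    rospy.spin()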

Any thoughts on how I would go about doing this?

tschmidt64 avatar Jan 19 '18 21:01 tschmidt64

Quick update: I got this working with the second method I described. However, each instance uses ~3GB of GPU memory, so it's not very scalable. I'm still looking into differentiating between camera sources on the same instance of darknet.

tschmidt64 avatar Jan 19 '18 22:01 tschmidt64

Hi, I'm doing the same work now. I tried the second method, and now I'm trying to feed multiple camera images as a batch for detection. GPU memory did not increase much, but it slowed down a lot. If you have any new ideas, please tell me...

JunCxy avatar Jan 20 '18 03:01 JunCxy

I'm doing method 1. I have 3 cams, but I only get ~11 FPS with one cam and Tiny-Yolo. How many FPS do you get when you use method 2 with both cams?

I use a 12GB GeForce 1080Ti.

Cuky88 avatar Feb 01 '18 13:02 Cuky88

With method 2 I get 2 FPS more, but it is much smoother now. With method 1 I had problems with threads and a lagging cam. Unfortunately, it is still too slow; I need at least 20 FPS. Running Yolo outside ROS gives 22 FPS. BTW, one net uses ~550MB of GPU RAM.

The next step I'm trying is to deactivate the drawing of boxes. I just need the coordinates, not the actual boxes drawn; those are only needed for debugging. But as soon as I set "enable_opencv" in ros.yaml to false, I can't see any images in rviz...

Cuky88 avatar Feb 02 '18 11:02 Cuky88

So my use case is similar. I have 2 cameras, but I also have one preprocessing OpenCV call I need to make. My options are:

- 2 camera drivers, a node to read in the images, perform the single OpenCV call and republish (a rough sketch of that middle node is below), and 2 darknet_ros instances

or

- fork darknet_ros: 2 camera drivers, plus a modified darknet_ros that alternates which camera it reads, performs the OpenCV call from the current image acquisition thread, and sets the published message to say which camera it is from

Native support for multiple camera sources would make the first option a lot more sensible, though it would still carry the penalty cost of an extra serialization for my single function call. It would also make the second option significantly simpler to implement.
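For reference, a rough sketch of that middle relay node from the first option. The topic names and the preprocess() call are placeholders standing in for my actual OpenCV call, and it assumes cv_bridge is available:

#!/usr/bin/env python
# Sketch only: read images from two cameras, apply one OpenCV call,
# and republish on per-camera topics for darknet_ros to consume.
import rospy
import cv2
from cv_bridge import CvBridge
from sensor_msgs.msg import Image

bridge = CvBridge()

def preprocess(img):
    # Placeholder for the actual single OpenCV call.
    return cv2.GaussianBlur(img, (3, 3), 0)

def make_callback(pub):
    def callback(msg):
        img = bridge.imgmsg_to_cv2(msg, desired_encoding='bgr8')
        out = bridge.cv2_to_imgmsg(preprocess(img), encoding='bgr8')
        out.header = msg.header  # keep the original timestamp and frame_id
        pub.publish(out)
    return callback

if __name__ == '__main__':
    rospy.init_node('preprocess_relay')
    pub1 = rospy.Publisher('/camera_1/image_proc', Image, queue_size=1)
    pub2 = rospy.Publisher('/camera_2/image_proc', Image, queue_size=1)
    rospy.Subscriber('/camera_1/image_raw', Image, make_callback(pub1))
    rospy.Subscriber('/camera_2/image_raw', Image, make_callback(pub2))
    rospy.spin()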

gftabor avatar Feb 02 '18 16:02 gftabor

Short update: I deactivated the OpenCV part in the darknet ROS node, since I use rviz for visualization. I also commented out the part where the bounding boxes are drawn. Instead, I publish the bounding boxes on their own topic and send them to another node. By deactivating the OpenCV-related code, I reach 30 FPS per camera, which is a huge improvement, but I had to make a small adjustment in darknet itself. With dynamic reconfigure or ros.yaml I can activate the OpenCV code for debugging purposes and visualize the image with bounding boxes in rviz, but then only at 13 FPS.

Cuky88 avatar Feb 02 '18 20:02 Cuky88

I rewrote the code and don't use multi-threading. I make the 3 camera images a batch input to tiny-yolo, and I get about 15 FPS for the 3-image detection result on a GTX 740Ti, more than 20 FPS on a GTX1050Ti, at a cost of only about 400MB of GPU memory.

JunCxy avatar Feb 06 '18 02:02 JunCxy

So I was planning on "hacking" the timestamp of the image I send to darknet_ros. I was thinking I would add offsets depending on the camera source: 1 year for the first camera and 2 years for the second camera. Then another node would correct the offset and republish the images to different topics. The issue is that the published boundingBox msgs don't use the timestamp that the image had, like I expected. If that minor thing were fixed, it would be a viable way to get "camera source" information.

https://github.com/leggedrobotics/darknet_ros/issues/63
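To illustrate the offset side of the hack, a rough sketch (camera topic names are placeholders; a "year" here is just a large constant so stamps from the two cameras can't overlap):

#!/usr/bin/env python
# Sketch only: shift each camera's image stamp by a distinct, huge offset
# before darknet_ros, so the detection timestamp would encode the source
# camera and a downstream node could subtract the offset again.
import rospy
from sensor_msgs.msg import Image

def make_callback(pub, years):
    offset = rospy.Duration(years * 365 * 24 * 3600)
    def callback(msg):
        msg.header.stamp += offset
        pub.publish(msg)
    return callback

if __name__ == '__main__':
    rospy.init_node('stamp_offsetter')
    pub = rospy.Publisher('/darknet_input/image_raw', Image, queue_size=1)
    rospy.Subscriber('/camera_1/image_raw', Image, make_callback(pub, 1))
    rospy.Subscriber('/camera_2/image_raw', Image, make_callback(pub, 2))
    rospy.spin()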

gftabor avatar Feb 07 '18 16:02 gftabor

Hello. We have a 4-camera setup on our drone. We use the "frame_id" of the imageHeader to tell which camera the detections come from. We ran into the same issue of the header and the detections being mismatched and potentially fixed it. We'd like it if y'all could test it on your setups.

PR here: https://github.com/leggedrobotics/darknet_ros/pull/113 Branch for testing here: https://github.com/Texas-Aerial-Robotics/darknet_ros/tree/headerFixForUpsteam
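On the consuming side it would look roughly like this; only a sketch, assuming darknet_ros_msgs with image_header carried through (which is what the PR addresses) and made-up frame_id values:

#!/usr/bin/env python
# Sketch only: route detections by the frame_id of the original image.
import rospy
from darknet_ros_msgs.msg import BoundingBoxes

def callback(msg):
    camera = msg.image_header.frame_id  # e.g. "camera_front", "camera_left"
    for box in msg.bounding_boxes:
        rospy.loginfo("%s saw %s (%.2f)", camera, box.Class, box.probability)

if __name__ == '__main__':
    rospy.init_node('detection_router')
    rospy.Subscriber('/darknet_ros/bounding_boxes', BoundingBoxes, callback)
    rospy.spin()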

umer936 avatar Aug 28 '18 19:08 umer936

@umer936 Hey Umer, I checked your repo. I wanted to ask how you are handling the input from 4 cameras. Your ros.yaml doesn't read from 4 different camera subscribers, so are you using 4 instances of darknet at once?

bansilol avatar Oct 27 '20 12:10 bansilol

@bansilol It's been a few years since I looked at this, but I believe we were using this to do so: https://github.com/Texas-Aerial-Robotics/camera_signalman We cycled through the cameras with their headers and fed each frame into darknet, then used the camera number to figure out where the detection came from. Here's what needs to be changed to read the camera streams into Darknet-Ros: https://github.com/Texas-Aerial-Robotics/Darknet-Ros/blob/89a8ded01da43a8ea54f4fd796e6fce944294478/darknet_ros/config/ros.yaml#L4

umer936 avatar Oct 29 '20 04:10 umer936

@umer936 Hey Umer, I followed the steps you described: in the ros.yaml file of camera_signalman I changed the subscriber camera topics to the topics published by my camera launch files, and I also changed the frame ids. Everything runs smoothly, but in the output of camera_signalman the topic camera_signalman/image_raw has no image, and when I launch my tiny_yolo it stops at "waiting for image". Could you suggest what could be wrong here?

bansilol avatar Nov 03 '20 18:11 bansilol

Hi altruists, could anyone please let me know how I can subscribe to multiple image topics in my single YOLOv5 detector? I am not using the darknet YOLO package, and I don't want to run multiple instances of the yolo node on my machine either. I am using this repo: https://github.com/Shua-Kang/ros_pytorch_yolov5 @umer936 @tschmidt64 @gftabor @Cuky88 @JunCxy @bansilol

TIA

RabsDRocker avatar Apr 17 '22 20:04 RabsDRocker

Could anyone please let me know how I can subscribe to multiple image topics in my single YOLOv5 detector?

@RabsDRocker This really is unrelated to this repo, but you need to subscribe to both cameras and republish them as a single topic. Luckily, simple tools for this already exist in ROS: http://wiki.ros.org/topic_tools/mux

Then, when your yolo node outputs bounding boxes, you will need to use the bounding boxes' image_header field and its frame_id to distinguish which bounding boxes correspond to which camera.

gftabor avatar Apr 17 '22 20:04 gftabor


@gftabor Thanks a lot for your comment. The mux package seems useful, but I don't see detections from my second camera topic in the detector. mux was supposed to be switching between my two camera topics.

I just used this example and subscribed the output topic in my detector, but only one camera was detected:

rosrun topic_tools mux sel_cmdvel auto_cmdvel joystick_cmdvel mux:=mux_cmdvel

RabsDRocker avatar Apr 17 '22 22:04 RabsDRocker

rosrun topic_tools mux sel_cmdvel auto_cmdvel joystick_cmdvel mux:=mux_cmdvel

Is this actually the command you used? It should have been configured for each different camera topic.

Should have been something like rosrun topic_tools mux combined_image_topic camera1/image camera2/image mux:=mux_images

And then you configure your ROS yolo node to subscribe to combined_image_topic.
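One caveat: mux only forwards whichever input is currently selected, so by default you will only ever see one camera. To interleave both cameras you have to switch the selection, e.g. through the mux select service. A rough sketch, assuming the mux was started with mux:=mux_images as above:

#!/usr/bin/env python
# Sketch only: alternate the mux between the two camera topics so the
# yolo node sees frames from both cameras.
import rospy
from topic_tools.srv import MuxSelect

if __name__ == '__main__':
    rospy.init_node('mux_cycler')
    rospy.wait_for_service('mux_images/select')
    select = rospy.ServiceProxy('mux_images/select', MuxSelect)
    topics = ['camera1/image', 'camera2/image']
    i = 0
    rate = rospy.Rate(2)  # switch a couple of times per second
    while not rospy.is_shutdown():
        select(topics[i % 2])
        i += 1
        rate.sleep()

Frames arriving on the non-selected input are dropped while it is switched away, so depending on your frame rates it may still be cleaner to run one detector per camera or batch the images.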

gftabor avatar Apr 18 '22 05:04 gftabor


@gftabor Thanks again for your reply. No, that was not the command I used. I configured each camera topic and changed the command accordingly, like you mentioned in your previous comment. I was also able to subscribe to the combined topic in my YOLO node, but it only shows the first camera. Actually, the republished topic selects the first camera topic by default.

RabsDRocker avatar Apr 18 '22 13:04 RabsDRocker