
Releasing Resources in TBB Node While Loop

Open inline42 opened this issue 3 years ago • 7 comments

Suppose I am doing a lambda inside an input_node:

queue is a concurrent_queue.

```cpp
const auto fetcher = [this](flow_control& control) {
    if (kill) control.stop();
    std::shared_ptr<Res> res;
    while (!queue.try_pop(res) && !kill) {}  // busy-wait until an item arrives
    return res;
};
```

How can I release resources in the while loop? I have the option to respond with a nullptr, and that case is handled; should I do that instead (keep responding with nullptr)?

inline42 avatar May 01 '22 13:05 inline42

What resources do you want to release in the while loop? If try_pop returns false, the queue was empty and nothing is assigned to res. Only if try_pop returns true is the result assigned to res. Can you explain your issue a bit more?

vossmjp avatar May 04 '22 17:05 vossmjp

I mean the thread will be busy-looping while the queue is empty, and that is what I mean by resources: the thread does nothing but spin because the queue is empty. I could make it sleep, for example, or use std::this_thread::yield(), but I am not sure if there is a better way?

inline42 avatar May 06 '22 00:05 inline42

Ok, I understand now. Yes, it is not good practice to block, sleep, or poll inside of the body of an input_node. As you have correctly pointed out, doing that holds onto a TBB worker thread that could be used elsewhere.

Is your application using the queue to create a producer-consumer relationship? If so, can you completely remove the queue and instead call try_put directly on the successor of the input_node whenever a new item needs to be processed? Essentially, you would replace the push to the queue with a try_put to the successor node.

Or, if you are pushing the new item to the queue from within another graph node, you might be able to use an async_node to separate the asynchronous part that gets/generates the item from sending the item to the successor. An async_node can be used to release a TBB worker while waiting for asynchronous work to complete. Again, if you use an async_node to generate the item, you’d probably completely remove the input_node and send to its successor. The async_node will only make sense if you are currently pushing to the queue from within another node in the graph, though.

Do you think either of these approaches might work?

vossmjp avatar May 06 '22 13:05 vossmjp

Thanks so much!! I will check out async_node and see if I can use it. Maybe I should describe in detail what I am doing. The goal is to implement a join-like node backed by a queue; call it run_time_queue_node. This new node type can join any number of other nodes at run time, meaning the number of "input" nodes (whose results will be queued) is not known at compile time; otherwise I would have used one of the other queueing TBB nodes. So run_time_queue_node holds a vector of function_nodes as member data, each connected to an external "input" node (not necessarily an input_node), and each node in that vector pushes into a single shared queue (a concurrent_queue that is also member data of run_time_queue_node). Another member, an input_node, pops results from that shared queue and sends them to the next edged node. That input_node member is what I was referring to in my original question above: """ Suppose I am doing a lambda inside an input_node:

queue is a concurrent_queue.

```cpp
const auto fetcher = [this](flow_control& control) {
    if (kill) control.stop();
    std::shared_ptr<Res> res;
    while (!queue.try_pop(res) && !kill) {}
    return res;
};
```

How can I release resources in the while loop? I have the option to respond with a nullptr, and that case is handled; should I do that instead (keep responding with nullptr)? """

BTW this is somehow related to https://stackoverflow.com/questions/45218702/join-node-graph-flow-construction

inline42 avatar May 06 '22 15:05 inline42

Thanks so much btw :) I ended up using try_put and it works fine, but I still have a problem when it's a priority queue. Any idea how to handle a priority queue, please? Is there a try_put with a specific priority?

inline42 avatar May 08 '22 08:05 inline42

There are two approaches to priorities in the flow graph. First, there are node priorities, but these apply to the nodes themselves; it sounds like you need to assign priorities to messages, not nodes, so they may not apply to your case.

There is also priority_queue_node. Maybe you can try_put to a priority_queue_node that precedes the node. You must be careful when using a priority_queue_node, though, because it only compares the messages that are currently in the queue. So if you connect it to a node that always immediately consumes messages, such as a parallel function_node, or even a serial function_node that buffers at its input, then messages flow straight through the queue and nothing is really prioritized. A priority_queue_node is really only effective when connected to a rejecting serial function_node, since that node type does not consume a message until it is available to process one and, until then, leaves all messages in the upstream priority queue.

vossmjp avatar May 10 '22 21:05 vossmjp

Very useful stuff, thanks so much!! I really appreciate being able to just ask questions like these and get answers. Thanks again. To confirm: in the case of the rejecting function_node, a message will be held by the priority queue (the sender) and not lost forever if rejected? And in general, is this behaviour of the sender holding the value when the receiver rejects it always true?

inline42 avatar May 10 '22 21:05 inline42

@kboyarinov Could you please take a look?

isaevil avatar Oct 05 '22 11:10 isaevil