Use collaborative_once_flag in LRU cache
I mentioned this here: https://github.com/oneapi-src/oneTBB/pull/939#issuecomment-1275237601
I'd recommend reworking concurrent_lru_cache to use tbb::collaborative_once_flag (which Henry and I instigated) to avoid a potential moonlighting deadlock in operator[]:
```cpp
if (op.is_new_value_needed()) {
    op.result().second.my_value = my_value_function(k);
    __TBB_store_with_release(op.result().second.my_is_ready, true);
} else {
    tbb::internal::spin_wait_while_eq(op.result().second.my_is_ready, false); //< Could cause moonlighting deadlock, IIRC.
}
```
I think that would mean changing
```cpp
struct map_value_type {
    value_type my_value;
    ref_counter_type my_ref_counter;
    typename lru_list_type::iterator my_lru_list_iterator;
    bool my_is_ready;
    map_value_type(...
};
```
to
```cpp
struct map_value_type {
    value_type my_value;
    ref_counter_type my_ref_counter;
    typename lru_list_type::iterator my_lru_list_iterator;
    tbb::collaborative_once_flag my_once_flag;
    map_value_type(...
};
```
and then operator[] is more like
```cpp
auto& map_value = op.result().second;
tbb::collaborative_call_once(map_value.my_once_flag, [&map_value, &my_value_function, &k] {
    map_value.my_value = my_value_function(k);
});
```
That way, rather than spin-waiting while another thread calculates the value (when the "other thread" could potentially be this very thread due to moonlighting!), the waiting thread would (a) collaborate to help get the expensive work done, and (b) workers would be isolated so they do not moonlight while evaluating my_value_function.
@isaevil is this issue still relevant?
@arunparkugan it is, it is not an issue but an enhancement suggestion.
That's right. LRU cache was what got me thinking about this sort of caching, which led me to write my own, which led me to come up with what became tbb::collaborative_once_flag.