Under concurrency, consider feeding tasks into the pool according to the concurrency level, and only notifying once at the very end — this should give a substantial speedup.
After optimization:
[2024-03-10 18:12:26.013] [test_performance_01]: time counter is : [8992.88] ms
[2024-03-10 18:12:35.042] [test_performance_01]: time counter is : [9028.09] ms
[2024-03-10 18:12:43.957] [test_performance_01]: time counter is : [8914.46] ms
[2024-03-10 18:12:52.957] [test_performance_01]: time counter is : [8999.67] ms
[2024-03-10 18:13:01.871] [test_performance_01]: time counter is : [8913.16] ms
Before optimization:
[2024-03-10 18:15:00.071] [test_performance_01]: time counter is : [12929.17] ms
[2024-03-10 18:15:13.053] [test_performance_01]: time counter is : [12980.45] ms
[2024-03-10 18:15:25.998] [test_performance_01]: time counter is : [12944.89] ms
[2024-03-10 18:15:38.660] [test_performance_01]: time counter is : [12661.35] ms
[2024-03-10 18:15:51.366] [test_performance_01]: time counter is : [12704.78] ms
With the following approach, however, performance actually degrades:
CVoid GDynamicEngine::parallelRunAll() {
    parallel_finished_size_ = 0;
    for (auto element : total_element_arr_) {
        const auto& exec = [element, this] {
            if (unlikely(cur_status_.isErr())) {
                /**
                 * If an error has already occurred, or the incoming element has
                 * already been executed (in theory this should never happen; the
                 * atomic counting logic was removed for performance, so this check
                 * guards against unexpected cases), stop the current flow immediately.
                 */
                return;
            }
            auto status = element->fatProcessor(CFunctionType::RUN);
            if (status.isErr()) {
                cur_status_ += status;
            }
            auto result = parallel_finished_size_.fetch_add(1, std::memory_order_relaxed) + 1;
            if (result >= total_end_size_ || cur_status_.isErr()) {
                CGRAPH_UNIQUE_LOCK lock(lock_);
                cv_.notify_one();
            }
        };
        thread_pool_->commit(exec, calcIndex(element));
    }

    {
        CGRAPH_UNIQUE_LOCK lock(lock_);
        cv_.wait(lock, [this] {
            /**
             * Stop waiting once either condition holds:
             * 1. all elements have finished executing
             * 2. the status has turned into an error
             */
            return (parallel_finished_size_ >= total_end_size_) || cur_status_.isErr();
        });
    }
}