brpc
Bthread: Lots of CPU cost on `do_futex` when running lightweight tasks
Describe the bug (描述bug)
Hi, we are using brpc and bthread as our RPC framework and runtime. Our tasks are lightweight: the workload is handling a request and reading some data in memory, usually finishing in ~10ms.
On a 96-core CPU, we configure 106 bthread workers. When the workload is lots of lightweight requests (about 300K requests per second), pprof shows that do_futex takes about 25% of CPU runtime. Meanwhile bthread_worker_usage is only 15-20, bthread_signal_second is also high, bthread_count is about 1600, and our server QPS is 300K. Some of the pprof output:
bthread::TaskGroup::end_sched 18.09%
- steal_task 5.61%
- sched_to 12.07%
- ready_to_run 11.85%
- do_futex 11.16% (call futex_wake)
- _raw_spin_unlock_irqrestore 10.42%
and:
bthread::TaskGroup::run_main_task 14.4%
- TaskGroup::wait_task 4.87%
- steal_task 4.63%
- futex_wait 8.42%
According to link, _raw_spin_unlock_irqrestore is attributed so much time because interrupts are off while the lock is held. But scheduling still takes much more time than we expected.
We guess that we produce too many lightweight bthreads, and scheduling them notifies lots of TaskGroup workers. After changing bthread_worker to 60 and restarting the server, the cost of scheduling dropped a lot. But restarting all machines is troublesome for us, and we had assumed that setting the worker number equal to hardware_concurrency would suit all kinds of workloads.
How can we handle this problem? I found that bthread can only add_worker dynamically, but cannot remove spare workers, which would solve this problem easily. Using a bthread pool may help reduce the signaling and bthread scheduling, but writing a ThreadPool on top of fibers is really dirty work.
To Reproduce (复现方法)
Expected behavior (期望行为)
bthread could reduce workers, or spend less time in do_futex when there are many lightweight tasks.
Versions (各种版本)
OS: Linux 5.4
Compiler: g++ 8.3.0
brpc: 0.9.6
protobuf: We use thrift 0.9
Additional context/screenshots (更多上下文/截图)
Update: After hacking some brpc code and restarting the machines, we downsample the signal in bthread, and performance is better.
What hacks did you make?
We added a gflag, FLAG_no_signal_sample, which can add NOSIGNAL to a bthread's flags.
Why is bthread_flush() placed in src/bthread/unstable.h? Under what conditions is NOSIGNAL better? Is there any experimental data?
@renguoqing
I'm not a brpc committer, so I don't know why bthread_flush is unstable.
Our program already calls bthread_flush under some conditions, so we think it's safe to use NOSIGNAL here. It may make latency grow a little, but it works well in our program.
We made no_signal a gflag so we can sample it, like:
if (FLAG_no_signal_sample != 1 && fastrand() < FLAG_no_signal_sample) {
// mark nosignal
}
FLAG_no_signal_sample's default value is 1, so in most cases it does nothing. We adjust it until less time is spent in do_futex while latency stays low.
The experimental data depends on the workload. We're running many lightweight tasks here, so we can downsample the signal to 1%. If there are lots of CPU-bound tasks, we think 1 is fine.
@mapleFU This is quite a good practice. Can you contribute a PR? That might help more people.
Thank you. @mapleFU
I also tried changing SIGNAL to NOSIGNAL in our program a few days ago, but performance got worse.
Our traffic is not as high as yours, so it may not be suitable for us.
Glad to contribute to brpc, but the code may be ugly. I'll try to submit it this weekend.
I met the same case. I did almost the same thing as @mapleFU mentioned above. I guess we can do better:
- If there are already enough bthread workers to saturate the CPU, just ignore signal calls. More workers help nothing.
- Handcraft the waiter linked list in brpc itself instead of relying on futex alone (we still need futex to wake a pthread). The benefit is that we can wake only the bthread worker that has just received a remote task (i.e. one from a non-bthread-worker thread), so no steal() is needed.
- NUMA awareness.
I have been thinking about these improvements for a while and may try to implement them in the next half year. Any help is appreciated!
PR link please? Thanks~