MNN
When MNN inference uses the CPU backend on mobile, inference speed drops while the app is in the background
As the title says, I used sched_getaffinity to check the CPU affinity and found that the inference thread is always bound to a big core, yet the inference time grew from 100 ms to roughly 500 ms once the app went into the background.
Is there any way to keep inference fast in that state?
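For reference, a minimal sketch (not MNN API, just plain bionic/Linux calls) of the kind of affinity check described above: it prints which cores the calling thread is allowed to run on via sched_getaffinity, and which core it is currently scheduled on via sched_getcpu.

```cpp
#include <sched.h>
#include <unistd.h>
#include <cstdio>

int main() {
    cpu_set_t set;
    CPU_ZERO(&set);
    // pid 0 means "the calling thread" for sched_getaffinity.
    if (sched_getaffinity(0, sizeof(set), &set) == 0) {
        for (int cpu = 0; cpu < CPU_SETSIZE; ++cpu) {
            if (CPU_ISSET(cpu, &set)) {
                printf("allowed on cpu %d\n", cpu);
            }
        }
    }
    // Which core the thread is running on right now.
    printf("currently on cpu %d\n", sched_getcpu());
    return 0;
}
```

Note that even when the affinity mask still includes a big core, the system can throttle background apps (frequency scaling, background restrictions), so the mask alone does not guarantee foreground-level speed.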
This is an Android system scheduling issue. Try game mode, or keep the app visible in a small floating window.
Marking as stale. No activity in 60 days.