ERNIE
ERNIE\applications\tasks\text_matching/run_infer.py: text similarity comparison issue (batch_size=8, max_seq_len=512)
predictor.run() is fast at the start, but each subsequent batch takes longer and longer. Every call compares 8 text pairs (batch_size=8). Why does predictor.run() keep slowing down as it goes? Is memory not being released, so later batches get slower and slower? How can this be fixed?

With batch_size=8: memory usage reaches 99%, CPU usage is only 16%, the machine freezes and crawls slower than a snail, and each predictor.run() call takes longer than the last (72s, 380s, 520s, ...).

After reducing batch_size from 8 to 4: memory usage drops to 71%, CPU usage rises to 25% (should max_seq_len also be reduced?), each predictor.run() call takes a fairly stable 11~12s, and the UI and other operations stay responsive without freezing.

To speed up the text comparison, is it enough to tune parameters such as batch_size and max_seq_len, or does the code in ERNIE\applications\tasks\text_matching/inference/custom_inference.py need to be modified?
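For reference, below is a minimal sketch of the kind of change that usually helps in this situation: running inference on small fixed-size batches, truncating inputs to a shorter max_seq_len, padding only to the longest sequence in each batch, and enabling Paddle Inference's memory optimization. This is not the actual custom_inference.py code; the model/params paths, the input handle order, and the assumption that inputs are already tokenized into input_ids/token_type_ids are all hypothetical.

```python
# Minimal sketch, assuming a standard Paddle Inference predictor for a
# two-input ERNIE text-matching model. Paths, handle order, and the
# pre-tokenized input format are assumptions, not the repo's actual code.
import numpy as np
from paddle import inference


def build_predictor(model_file, params_file, num_threads=8):
    config = inference.Config(model_file, params_file)
    config.disable_gpu()                            # CPU inference, as in the report
    config.set_cpu_math_library_num_threads(num_threads)
    config.enable_memory_optim()                    # reuse intermediate buffers
    config.switch_ir_optim(True)
    return inference.create_predictor(config)


def predict_in_chunks(predictor, encoded_pairs, batch_size=4, max_seq_len=128):
    """Run inference on small fixed-size batches to keep peak memory low.

    encoded_pairs: list of (input_ids, token_type_ids), each a list of ints,
    already tokenized. Sequences longer than max_seq_len are truncated.
    """
    input_names = predictor.get_input_names()
    results = []
    for start in range(0, len(encoded_pairs), batch_size):
        chunk = encoded_pairs[start:start + batch_size]
        # Truncate, then pad only to the longest sequence in this chunk
        # (<= max_seq_len), so compute does not grow with the global maximum.
        seqs = [(ids[:max_seq_len], types[:max_seq_len]) for ids, types in chunk]
        cur_len = max(len(ids) for ids, _ in seqs)
        input_ids = np.zeros((len(seqs), cur_len), dtype="int64")
        token_type_ids = np.zeros((len(seqs), cur_len), dtype="int64")
        for i, (ids, types) in enumerate(seqs):
            input_ids[i, :len(ids)] = ids
            token_type_ids[i, :len(types)] = types

        predictor.get_input_handle(input_names[0]).copy_from_cpu(input_ids)
        predictor.get_input_handle(input_names[1]).copy_from_cpu(token_type_ids)
        predictor.run()

        out_handle = predictor.get_output_handle(predictor.get_output_names()[0])
        results.append(out_handle.copy_to_cpu())    # pull results out after each batch
    return np.concatenate(results, axis=0)
```

If memory usage still climbs across calls with a sketch like this, the accumulation is more likely happening in the surrounding Python code (e.g. collecting per-batch tensors or logs in a growing list) than in predictor.run() itself.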
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Feel free to reopen it. Thank you for your contributions.