Wang Xin
@aspaul20 you may need to install pre-commit and run `pre-commit run --all-files`
@aspaul20, this looks great! I'll take some time to review the code further. In the meantime, could you add some documentation to help users understand how to use the `slice` operation?
And you may need to fix the Contributor License Agreement (CLA) check.
Could you provide a minimal reproducible demo?
@tisoz What kind of server environment are you running in? Also, could you provide a minimal reproducible demo?
@tisoz, thanks for providing the code and the description.
Would using a thread pool be better?
Would setting `FLAGS_allocator_strategy=naive_best_fit` help mitigate it? https://github.com/PaddlePaddle/PaddleOCR/blob/e73eb76271b441be9bd7981789417696f3f27ae0/tools/infer/predict_system.py#L22 https://www.paddlepaddle.org.cn/documentation/docs/zh/guides/flags/memory_cn.html
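A minimal sketch of one way the thread-pool suggestion could look, assuming a FastAPI service that wraps a single shared PaddleOCR engine; the endpoint path, worker count, and image decoding are placeholders, not the reporter's actual code:

```Python
import asyncio
from concurrent.futures import ThreadPoolExecutor

import cv2
import numpy as np
from fastapi import FastAPI, UploadFile
from paddleocr import PaddleOCR

app = FastAPI()
ocr = PaddleOCR(use_angle_cls=True, lang="ch")  # one shared engine for the whole process
pool = ThreadPoolExecutor(max_workers=1)        # fixed pool instead of per-request threads

@app.post("/ocr")
async def run_ocr(file: UploadFile):
    img = cv2.imdecode(np.frombuffer(await file.read(), np.uint8), cv2.IMREAD_COLOR)
    loop = asyncio.get_running_loop()
    # Run the blocking OCR call on the fixed worker pool so the number of
    # threads (and the memory they hold) stays bounded.
    result = await loop.run_in_executor(pool, ocr.ocr, img)
    return {"result": result}
```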
I gave it a try: after changing this line to the following, memory should no longer keep growing. https://github.com/PaddlePaddle/PaddleOCR/blob/e73eb76271b441be9bd7981789417696f3f27ae0/tools/infer/predict_system.py#L22

```Python
os.environ["FLAGS_allocator_strategy"] = "naive_best_fit"
os.environ["FLAGS_eager_delete_tensor_gb"] = "0.0"
os.environ["FLAGS_memory_fraction_of_eager_deletion"] = "1.0"
```

log: [run.log](https://github.com/PaddlePaddle/PaddleOCR/files/15405193/run.log)

```
import loguru
import psutil
from fastapi import FastAPI, Request
from paddleocr import PaddleOCR
from...
```
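For users who call the `paddleocr` package directly rather than `tools/infer/predict_system.py`, a minimal sketch of the same idea is below; it assumes the flags must be exported before paddle is imported so they are picked up when the framework initializes, and the image path is hypothetical:

```Python
import os

# Set the allocator flags before paddle/paddleocr is imported.
os.environ["FLAGS_allocator_strategy"] = "naive_best_fit"
os.environ["FLAGS_eager_delete_tensor_gb"] = "0.0"
os.environ["FLAGS_memory_fraction_of_eager_deletion"] = "1.0"

from paddleocr import PaddleOCR

ocr = PaddleOCR(use_angle_cls=True, lang="ch")
result = ocr.ocr("sample.jpg")
print(result)
```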
Alternatively, switching to a different inference backend, such as onnxruntime or openvino, also avoids this problem.
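A minimal sketch of the onnxruntime route, assuming the detection model has already been converted with paddle2onnx; the file name `det.onnx`, the input shape, and the dummy preprocessing are placeholders:

```Python
import numpy as np
import onnxruntime as ort

# Load the exported model with the CPU execution provider.
sess = ort.InferenceSession("det.onnx", providers=["CPUExecutionProvider"])
input_name = sess.get_inputs()[0].name

# Dummy normalized NCHW tensor standing in for a preprocessed image.
dummy = np.random.rand(1, 3, 960, 960).astype(np.float32)
outputs = sess.run(None, {input_name: dummy})
print([o.shape for o in outputs])
```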