enazoe

Results 84 comments of enazoe

@xNeorem try deleting the **-gencode arch** line in the CMakeLists, or set it to _compute_53,code=sm_53_ for the Nano, and check issue #13
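A sketch of what that CMake change could look like. The variable name (`CUDA_NVCC_FLAGS`) and the exact form of the existing line in the repo's CMakeLists.txt are assumptions here; adapt to whatever the file actually uses:

```cmake
# Sketch only: on a Jetson Nano (compute capability 5.3), either delete the
# existing -gencode flag entirely, or pin it to the Nano's architecture:
set(CUDA_NVCC_FLAGS ${CUDA_NVCC_FLAGS};-gencode arch=compute_53,code=sm_53)
```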

TRT 6.x is not supported yet

Batch inference is supported now; the max batch size is read from the cfg file. I will fix the README, sorry

Which versions of JetPack and TensorRT are you using?

Make sure you have enough hard disk space when generating the engine file. And could you upgrade to JetPack 4.4? I have not tested it on JetPack 4.3.

@yutao007 @jch-wang Yes, it is a memory problem

> Can this yolo-tensorrt be deployed on Ubuntu, on an x86 machine with an NVIDIA GPU rather than a Jetson board?

Yes, it can

"Platform doesn't support this prectsion."

How many images are you running inference on?

OK, you should set the batch size via [this](https://github.com/enazoe/yolo-tensorrt/blob/master/modules/class_detector.h#L51); the default value is 4, and you can set it according to your own GPU memory.