
Results 49 comments of azhurkevich

@yeandy I think this will be it for the initial TensorRT integration from my side. I'm moving multi-input models to a future integration.

@yeandy I'll look into what you've mentioned tomorrow and will try to address the issues as soon as I figure them out.

@ryanthompson591 My bad, I forgot about the implementation. Pushed now, thanks!

I don't understand how this is a TensorRT-related issue; you won't be able to run the /samples/python/detectron2 scripts without detectron2. If you don't want it, replace all detectron2 calls with...

I was just testing it a couple of days ago and everything was working for me. Looking at the versions of the libraries you've posted, I think this is the culprit here most...

@SEHAIRIKamal You definitely should not try to convert ONNX on a Jetson; that will be a source of many headaches related to library compatibility on ARM. We highly...

@SEHAIRIKamal You'll probably have to use [DeepStream](https://developer.nvidia.com/deepstream-sdk) for real-time video inference. I'm not sure how PIL is supposed to play with it, though.

@frankvp11 When you export a detectron2 model to ONNX, you end up with a graph that has ONNX ops as well as Caffe2 ops, which we do not support. As...
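As a rough illustration of why the mixed export is a problem, you can scan the exported graph's node list for ops that live outside the standard ONNX domain; TensorRT will reject those. This is only a sketch: the Caffe2 domain string and op names below are assumptions for illustration, not taken from the actual export.

```python
# Sketch: flag graph nodes whose domain is not the standard ONNX domain.
# Nodes are modeled as (op_type, domain) pairs; in a real ONNX model you
# would read these from model.graph.node via the `onnx` package.
ONNX_DOMAINS = {"", "ai.onnx"}  # empty string is the default ONNX domain

def unsupported_nodes(nodes):
    """Return (op_type, domain) pairs that TensorRT cannot parse."""
    return [(op, dom) for op, dom in nodes if dom not in ONNX_DOMAINS]

# Hypothetical detectron2 export mixing standard ONNX ops with Caffe2 ops.
nodes = [
    ("Conv", ""),
    ("Relu", ""),
    ("GenerateProposals", "org.pytorch._caffe2"),  # assumed domain name
    ("RoIAlign", "org.pytorch._caffe2"),
]
print(unsupported_nodes(nodes))
```

This is the kind of check that motivates a conversion script replacing the Caffe2 ops with TensorRT-supported equivalents before building an engine.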

@frankvp11 build_engine.py can produce a properly calibrated int8 engine that will be much faster; this is especially important for Jetson devices. So as soon as you get through create_onnx.py, try trtexec....
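A minimal trtexec invocation for building an int8 engine might look like the following; the file names are placeholders and the exact flag set depends on your TensorRT version:

```shell
# Build a TensorRT engine from the converted ONNX model (hypothetical paths).
# --int8 enables int8 precision; --fp16 is commonly added as a fallback for
# layers that have no int8 implementation.
trtexec --onnx=converted.onnx \
        --int8 --fp16 \
        --saveEngine=model_int8.engine \
        --calib=calibration.cache
```

The calibration cache is what build_engine.py produces during calibration; without it, trtexec would assign int8 dynamic ranges itself, usually with worse accuracy.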

@frankvp11 It seems your model could've been changed or retrained; based on your `--det2_weights`, the name should be `model_final_f10217.pkl`. Also, you are not following the instructions on how to properly export the...