AnyNet
Inference on Jetson TX2
Hi, thanks for your great work! What backend did you use to run inference on the device? Did you run the PyTorch model as is, or convert it to TensorRT?
Same question. Hoping for a reply from the author.
Hello, I am also interested in this. Did you run any tests? I am also interested in migrating the model to TensorFlow Lite.
Same question. I can't reach the expected FPS running the original PyTorch model on the TX2. Am I missing some tricky detail when benchmarking on the TX2?
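One common cause of misleading FPS numbers on the TX2 is timing without synchronizing the GPU: CUDA launches are asynchronous, so wall-clock timing without a `torch.cuda.synchronize()` barrier undercounts the real latency, and the first iterations include kernel compilation and allocation overhead. A framework-agnostic timing harness (function names here are hypothetical) might look like:

```python
import time

def benchmark(model_fn, inputs, warmup=10, iters=100, sync=lambda: None):
    """Return throughput in frames/sec for model_fn(inputs)."""
    # Warm-up: exclude one-time CUDA kernel compilation / allocations.
    for _ in range(warmup):
        model_fn(inputs)
    sync()  # e.g. torch.cuda.synchronize() when running on a GPU
    start = time.perf_counter()
    for _ in range(iters):
        model_fn(inputs)
    sync()  # wait for all queued GPU work before stopping the clock
    elapsed = time.perf_counter() - start
    return iters / elapsed
```

With PyTorch on the TX2 you would pass `sync=torch.cuda.synchronize` and run under `torch.no_grad()`; also make sure the board is in max-performance mode (`sudo nvpmodel -m 0 && sudo jetson_clocks`), since the default power profile clocks the GPU well below peak.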