mtcnn_facenet_cpp_tensorRT
Inference over Facenet makes different results on Jetson Nano and dGPU
Hi, I'm using this repo on a laptop with TensorRT (the TensorRT 20.03 Docker image) and also on a Jetson. The code works without problems on the Jetson: the detections are OK and the values are in the expected range. But when I run this repo on my laptop, it returns values between 500 and 1000. Do you know why?
Hi, 500 and 1000 are values of what? Kindly elaborate.
@shubham-shahh I think these are the embeddings.
@IsraelLencina I am also trying to reproduce this setup on a dGPU and am facing several issues:
- h5 to pb conversion
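For reference, the usual path from a Keras .h5 to something TensorRT can consume goes through a frozen .pb and then UFF. A rough sketch of the CLI side, assuming the model has already been frozen to `facenet_frozen.pb` (file names here are placeholders, not taken from the repo):

```shell
# Sketch only: file names are placeholders; adapt to the repo's conversion scripts.
# 1. Freeze the Keras .h5 into a TF 1.x frozen graph (.pb) with your conversion script.
# 2. Convert the frozen graph to UFF using the converter shipped with TensorRT:
convert-to-uff facenet_frozen.pb -o facenet.uff
# 3. Build the .engine on the machine that will run inference; serialized
#    engines are specific to the GPU (and TensorRT version) they were built on.
```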
Yes, it's the embeddings. I've seen that the difference appears after the inference.
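One thing worth checking: FaceNet embeddings are normally L2-normalized to unit length, so per-dimension values in the hundreds suggest the normalization step got lost somewhere in the conversion. A quick, self-contained sanity check (pure Python, toy numbers that are not from the repo):

```python
import math

def l2_norm(vec):
    """Euclidean length of a vector."""
    return math.sqrt(sum(x * x for x in vec))

def l2_normalize(vec, eps=1e-10):
    # FaceNet's final layer scales the embedding to unit L2 norm;
    # if the converted graph dropped it, raw activations come out unscaled.
    n = max(l2_norm(vec), eps)
    return [x / n for x in vec]

raw = [512.0, 640.0, 896.0]       # toy values on the scale reported for the dGPU
emb = l2_normalize(raw)
print(round(l2_norm(emb), 6))     # a properly normalized embedding has norm 1.0
```

If the Jetson output has norm close to 1 and the dGPU output does not, the normalization op is the first place to look in the converted graph.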
Hi again, I've seen in the NOTES section that the .uff and .engine files are GPU-specific. I've tried regenerating everything from step 3 (as the NOTES section says), but the "GPU embeddings" are still larger than the "Jetson embeddings".
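Since engines are not portable across GPUs, the .engine has to be rebuilt on each target machine. One way to do that from the .uff is with `trtexec`; a sketch, where the input/output tensor names and the input shape are placeholders that must be checked against the repo's code:

```shell
# Sketch: "input", "embeddings", and 3,160,160 are assumptions, not verified
# against this repo. Rebuild on the dGPU machine itself:
trtexec --uff=facenet.uff \
        --uffInput=input,3,160,160 \
        --output=embeddings \
        --saveEngine=facenet.engine
```

If the freshly built engine still produces out-of-range values, the problem is more likely in the .uff (e.g. an op dropped during conversion) than in the engine build.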
Are you referring to the master branch or the develop branch?
Always master