Adrià
Hmm, can you run the inference on the network twice in a row and measure only the second time? Maybe your measurements include the allocation on the GPU.
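Something like this is what I mean, sketched in Python (`run_inference` here is just a hypothetical stand-in for the actual model call):

```python
import time

def run_inference(img):
    # Hypothetical placeholder for the real dlib model call;
    # the first call typically triggers GPU memory allocation.
    return sum(img)  # stand-in work

img = list(range(100_000))

run_inference(img)  # warm-up run: not timed

t0 = time.perf_counter()
run_inference(img)  # timed second run
elapsed = time.perf_counter() - t0
print(f"second inference: {elapsed * 1000:.2f} ms")
```

If the second run is much faster than the first, the gap was the one-time allocation, not the model itself.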
Then I don't know what's going on. For reference, I timed the inference of one image using the dlib C++ example [dnn_face_recognition_ex](http://dlib.net/dnn_face_recognition_ex.cpp.html) on a 12th Gen Intel® Core™ i7-1260P ×...
Another thing, are you certain it's that model that's causing the latency? Not the face detector? How big are your images?
Ah, that makes more sense. That image doesn't seem that big, though. That model creates an image pyramid which will have roughly 4 times the number of pixels...
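To see where that factor comes from: if each pyramid level downscales by a ratio r (assuming the detector uses dlib's `pyramid_down<6>`, that's r = 5/6), the pixel counts form a geometric series, so the whole pyramid holds about 1 / (1 - r²) times the original image's pixels:

```python
# Total pixels in an image pyramid with per-level downscale
# ratio r: each level has r**2 times the pixels of the one
# before it, so the series sums to 1 / (1 - r**2).
r = 5 / 6  # assumed ratio for dlib's pyramid_down<6>
total_factor = 1 / (1 - r**2)
print(f"pyramid holds ~{total_factor:.2f}x the original pixels")
```

For r = 5/6 that works out to 36/11 ≈ 3.3, i.e. roughly the factor mentioned above.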
Yeah, I don't know what's going on. That model should be really fast, even on relatively large images, since it only has 7 convolutional layers... There must be something else...
Can you just run the face detector model using the official dlib examples:

- C++: [dnn_mmod_face_detection_ex](http://dlib.net/dnn_mmod_face_detection_ex.cpp.html)
- Python: [cnn_face_detector.py](http://dlib.net/cnn_face_detector.py.html)

Maybe that other library you're using does something we are not...
Ok, so I ran the C++ example on an NVIDIA Quadro RTX 5000 and, with the default example, the second inference on each image (to avoid measuring the memory allocation)...
Assuming you are at the top-level directory of the dlib repository:

```sh
cd examples
cmake -B build -G Ninja
cmake --build build -t dnn_mmod_face_detection_ex
```

If you don't specify `-G...
I've just modified the font with FontForge to include those two characters (copied them from `U+3137` and `U+314F`), but it still doesn't work.
Hi, thank you for your reply. I tried with both OTF and TTF and still get the same problem. I will repost if I make any progress on this. Thank...