Yet-Another-EfficientDet-Pytorch
problem with efficientdet_test.py?
I ran efficientdet_test.py with d0, but the result is different from yours in the test folder; there are a lot of missed objects. And the FPS is only 18 on a Tesla V100.
Set force_input_size=1920. The official repo uses the original image size, while this repo uses the network's default input size. If you want to compare the two repos, you must make sure the input size is consistent.
And I have no idea why the V100 is so slow. Please provide minimal code to replicate the speed.
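For reference, a minimal timing harness along these lines is what is usually meant by "minimal code to replicate the speed." This is a hedged sketch, not the repo's actual benchmark: `measure_fps`, its parameters, and the dummy workload are all illustrative. For a real GPU model you would pass `synchronize=torch.cuda.synchronize` so pending kernels are flushed before the clock is read; otherwise CUDA's asynchronous execution makes the numbers meaningless.

```python
import time

def measure_fps(infer, n_warmup=10, n_iters=100, synchronize=lambda: None):
    """Time an inference callable and return frames per second.

    `infer` runs one forward pass; `synchronize` should flush pending
    GPU work (e.g. torch.cuda.synchronize) so wall-clock times are valid.
    """
    for _ in range(n_warmup):  # warm-up: cuDNN autotune, lazy allocations
        infer()
    synchronize()
    start = time.perf_counter()
    for _ in range(n_iters):
        infer()
    synchronize()
    elapsed = time.perf_counter() - start
    return n_iters / elapsed

# Dummy CPU workload standing in for model(x):
fps = measure_fps(lambda: sum(i * i for i in range(1000)))
print(f"{fps:.1f} FPS")
```

Warm-up iterations matter here: the first few forward passes include one-time costs (kernel compilation, memory allocation) that would otherwise drag the average down.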
I can confirm the following speeds on a V100, running the provided efficientdet_test.py script from the repository with a few different configurations:
| FPS | Model | Input size |
|---|---|---|
| 21.3 | d0 | default |
| 17.1 | d0 | 1920 |
| 4.9 | d7 | default |
| 3.3 | d7 | 1920 |
This produced the following outputs:
D0 using force_input_size=1920
D7 using force_input_size=1920
The smaller d0 network seems more inclined to predict people over motorcycles, whereas the larger d7 model seems to favor motorcycles. D0 also produces more false positives than the larger d7 model, but detects more objects overall. The d0 image seems to match the one provided in the repository.
Still no idea why the V100 is so slow. Maybe it's the torch version? Mine is torch 1.4.
Testing at 1920 is a mistake, but since the official author did it, I did it too. You should test the networks at their default resolution; that's only fair.
I use an RTX 3080 with force_input_size=False, using the provided efficientdet_test.py script in the repository, and the speed is 15 fps (63 ms). Most of the time is spent in the backbone network. I don't know how to reach the speed you reported.
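One way to back up the "most of the time is in the backbone" observation is to time each stage of the pipeline separately. The sketch below is a hypothetical helper, not code from this repo: the stage names and the dummy callables standing in for backbone / BiFPN / heads are illustrative assumptions, and on a real GPU you would again pass `synchronize=torch.cuda.synchronize` so each stage's kernels finish before the clock is read.

```python
import time

def time_stages(stages, x, synchronize=lambda: None, n_iters=50):
    """Measure the average latency of each pipeline stage in milliseconds.

    `stages` is a list of (name, callable) pairs; each stage's output is
    fed as input to the next, mimicking a sequential forward pass.
    """
    timings = {}
    for name, fn in stages:
        synchronize()
        start = time.perf_counter()
        for _ in range(n_iters):
            y = fn(x)
        synchronize()
        timings[name] = (time.perf_counter() - start) / n_iters * 1000
        x = y  # chain the output into the next stage
    return timings

# Dummy stages standing in for backbone / BiFPN / prediction heads:
stages = [
    ("backbone", lambda x: [v * 2 for v in x]),
    ("bifpn",    lambda x: [v + 1 for v in x]),
    ("heads",    lambda x: sum(x)),
]
print(time_stages(stages, list(range(100))))
```

If the backbone row dominates the resulting dictionary, that points at EfficientNet itself rather than the BiFPN or heads, which narrows down where a torch-version or cuDNN difference could cost the missing frames per second.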