Adam Van Etten
As noted by @ghghgh777, this appears to be a GPU architecture issue that has also been observed in YOLO: https://github.com/pjreddie/darknet/issues/486. I'm still digging into the issue, but it seems...
Glad you got it working. The confidences have always been relatively low for the YOLT models, though increasing the training time will raise them.
I'm not sure what's causing the confidences to be so low. You could try retraining with the recently updated v2 and see if the problem persists.
Have you tried running line 21 of the Dockerfile from inside the Docker image? Once started, run: `export PYTHONPATH=$PYTHONPATH:/tensorflow/models/research/:/tensorflow/models/research/slim`
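If the export is in place, here is a quick sanity check (a minimal sketch; assumes the /tensorflow/models/research layout from the Dockerfile):

```python
# Run inside the container after the export above; if this import fails,
# PYTHONPATH was not picked up by the current shell.
import object_detection  # provided by /tensorflow/models/research/
print(object_detection.__file__)
```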
What's in the directory passed to `--train_model_path` (/simrdwn/results/train_ssd_inception_v2_cowc_mat_2019_07_11_18-16-18)?
The most recent push should solve this problem.
The parse_cowc.py script slices large images into smaller training windows. It is easily adaptable to xView in combination with the scripts in the xView repository. I'll try to put...
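For illustration, here is a minimal sketch of the kind of sliding-window slicing parse_cowc.py performs (the function name, window size, and overlap below are hypothetical, not the script's actual API):

```python
import os
import cv2

def slice_image(image_path, out_dir, window=416, overlap=0.1):
    # Hypothetical helper: cut a large image into overlapping square
    # chips suitable for training; see parse_cowc.py for the real logic.
    img = cv2.imread(image_path)
    h, w = img.shape[:2]
    step = int(window * (1 - overlap))
    os.makedirs(out_dir, exist_ok=True)
    base = os.path.splitext(os.path.basename(image_path))[0]
    for y in range(0, max(h - window, 0) + 1, step):
        for x in range(0, max(w - window, 0) + 1, step):
            chip = img[y:y + window, x:x + window]
            cv2.imwrite(os.path.join(out_dir,
                        '{}_{}_{}.png'.format(base, y, x)), chip)
```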
It's hard to determine what might be happening without further details. What test data are you using, and what parameters are used for the test command?
Can you print out the output of the test script? The pd.read_csv() error is usually a symptom of a failure further upstream, where the inference script did not...
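A quick way to check for that failure mode before calling pd.read_csv() (a sketch; the predictions path below is hypothetical):

```python
import os
import pandas as pd

preds_path = '/simrdwn/results/predictions.csv'  # hypothetical output path
if not os.path.isfile(preds_path) or os.path.getsize(preds_path) == 0:
    # Inference likely failed before writing any detections; check the
    # logs of the test command instead of debugging pd.read_csv().
    raise RuntimeError('No inference output found at ' + preds_path)
df = pd.read_csv(preds_path)
```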
This is likely an issue with YOLO and newer GPU architectures (see https://github.com/avanetten/simrdwn/issues/31#issuecomment-501925778). You may need to experiment with the build settings in the yolt2 [Makefile](https://github.com/avanetten/simrdwn/blob/master/yolt2/Makefile).