thias15
Hi guys. I encountered the same issue in the `evaluate.py` script with YOLOv4. It seems the order of the outputs, `pred[0]` and `pred[1]`, is random. This leads to a problem...
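A minimal workaround sketch (not the repo's actual code): instead of relying on the output order, identify the tensors by shape. I'm assuming here that the boxes tensor is the one whose last dimension is 4, which may not hold for every export:

```python
def split_outputs(pred):
    """Return (boxes, scores) regardless of the order in pred."""
    a, b = pred[0], pred[1]
    if a.shape[-1] == 4:  # assumption: boxes are (x, y, w, h)
        return a, b
    return b, a  # order was swapped
```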
How many epochs are you running? Which version of TensorFlow are you using?
Can you try replacing `val_mean_absolute_error` with `val_mae`? It seems the metric name was changed in newer releases. Alternatively, you could try an older version of TensorFlow, e.g. 2.6.0.
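If the metric is monitored in a callback, here is a minimal sketch of the rename, assuming a Keras `ModelCheckpoint` (I don't know the exact callback in your setup):

```python
import tensorflow as tf

# Sketch only: in recent TF/Keras releases the logged key is "val_mae";
# older releases logged the same metric as "val_mean_absolute_error".
checkpoint = tf.keras.callbacks.ModelCheckpoint(
    "best.h5",
    monitor="val_mae",  # previously monitor="val_mean_absolute_error"
    save_best_only=True,
)
```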
Any update on this, @emo8899?
A "redo matching" option in this menu would be great.
@sanyatuning What would actually be great is: (1) add an option to the dropdown to redo matching for a dataset; this could be used when users have added data manually. (2) Run...
Yes, this is expected since it is currently using an off-the-shelf object detector. Would you like to contribute to the integration of an object tracker?
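For context, a minimal sketch of the kind of IoU-based association a tracker would add on top of the detector. All names here are hypothetical; a real integration would more likely wrap an existing tracker such as SORT:

```python
def iou(a, b):
    """IoU of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    union = (a[2]-a[0])*(a[3]-a[1]) + (b[2]-b[0])*(b[3]-b[1]) - inter
    return inter / union if union > 0 else 0.0

def assign_ids(prev_tracks, detections, next_id, thresh=0.3):
    """Greedily match each frame's detections to existing tracks by IoU."""
    tracks, used = {}, set()
    for tid, box in prev_tracks.items():
        scores = [(iou(box, d), i) for i, d in enumerate(detections) if i not in used]
        if scores:
            best_iou, best_i = max(scores)
            if best_iou >= thresh:
                tracks[tid] = detections[best_i]
                used.add(best_i)
    for i, d in enumerate(detections):
        if i not in used:
            tracks[next_id] = d  # unmatched detection starts a new track
            next_id += 1
    return tracks, next_id
```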
This sounds like a great idea. Would you be willing to help build this feature?
Please fix the style.