Input size and experiment parameters
Hello,
I'm trying to re-use your model on either the SportsMOT dataset or my own dataset, but in both cases I don't obtain any result when running your tracking experiment:
```
python3 tools/track_mixsort.py -expn SportsMOT -f exps/example/mot/yolox_x_sportsmot.py -c pretrained/yolox_x_sports_train.pth.tar -b 1 -d 1 --config track
```
I tried to track down where the issue occurs and found that it happens during model inference.
My guess is that it is related to the input size and the experiment parameters.
Both SportsMOT and my own dataset contain images of shape (720, 1280, 3), so I changed the experiment parameters to the following input and test size:
However, I get errors during model inference:
The script quits right after the output above.
Maybe I forgot a preprocessing step to get the input to the right size (I also tried the repository's default parameters, i.e. self.test_size = (800, 1440), but that didn't work either).
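For what it's worth, a quick sanity check on the feature-map arithmetic (assuming the usual YOLOX strides of 8/16/32; `fpn_shapes` is a hypothetical helper for illustration, not part of the MixSort code base) shows why a height of 720 is problematic:

```python
import math

def fpn_shapes(h, w, strides=(8, 16, 32)):
    """Feature-map sizes a stride-8/16/32 backbone would produce for an
    (h, w) input, using ceiling division for odd intermediate sizes."""
    return [(math.ceil(h / s), math.ceil(w / s)) for s in strides]

# A 720x1280 input: the stride-16 map has 45 rows, but the stride-32 map
# (23 rows) upsamples 2x to 46 rows, so the FPN concat cannot line up.
print(fpn_shapes(720, 1280))    # [(90, 160), (45, 80), (23, 40)]
print(2 * math.ceil(720 / 32))  # 46, not 45

# A 736x1280 input (736 = 23 * 32) keeps every level consistent:
print(fpn_shapes(736, 1280))    # [(92, 160), (46, 80), (23, 40)]
```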
Hi @arthur-andre ,
It seems the program quits without any error output, so I recommend adding some breakpoints and running the program step by step to see where it fails.
Hi,
The program quits exactly at the concatenation step in the forward pass (torch.cat).
It tries to concatenate tensors of torch.Size([1, 640, 46, 80]) and torch.Size([1, 640, 45, 80]) (the sizes printed in the output above).
They should presumably be the same, which is why I asked about the input size earlier.
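The failing check can be reproduced from the shapes alone (a sketch that mimics torch.cat's rule, not the actual torch code): when concatenating along the channel dimension, every other dimension must match exactly.

```python
def can_cat(shape_a, shape_b, dim=1):
    """Mimic torch.cat's shape check: all dimensions except `dim`
    must be equal, otherwise torch raises a RuntimeError."""
    return all(a == b
               for i, (a, b) in enumerate(zip(shape_a, shape_b))
               if i != dim)

# The two shapes from the logs: height 46 vs 45 -> concat fails.
print(can_cat((1, 640, 46, 80), (1, 640, 45, 80)))  # False
print(can_cat((1, 640, 46, 80), (1, 640, 46, 80)))  # True
```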
Hello @arthur-andre ,
I think this is a problem within YOLOX itself. I imported the YOLOPAFPN module directly from yolox.models, and this error occurs regardless of the shape of the input tensor.
I recommend checking your PyTorch version, or seeking help at https://github.com/Megvii-BaseDetection/YOLOX
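If the root cause really is the odd feature-map sizes, one possible workaround (an assumption on my part, not something from the MixSort or YOLOX docs) is to pick a test size whose dimensions are both divisible by the largest stride, 32, e.g. by rounding up:

```python
def pad_to_multiple(h, w, multiple=32):
    """Hypothetical helper: round (h, w) up to the next multiple of
    `multiple`, so every FPN level downsamples without remainder.
    A 720x1280 frame would become 736x1280."""
    round_up = lambda x: ((x + multiple - 1) // multiple) * multiple
    return round_up(h), round_up(w)

print(pad_to_multiple(720, 1280))   # (736, 1280)
print(pad_to_multiple(800, 1440))   # (800, 1440) -- already aligned
```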