DCFNet
The padding of the conv layers
Thanks for the good work as usual~ Take the type-7 and type-12 networks for example. I find that the padding of the conv layers is 0 everywhere during training (input size: 125, output size: 121). But when tracking, the padding is set to 1 (input size: 125, output size: 125) while using the same conv parameters. Can you explain why you do this? Is it a theoretical choice (better in theory) or just an experimental result (better performance in experiments)?
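
For reference, here is a minimal PyTorch sketch (my own toy example, not the repo's code) of the size difference I mean, assuming a two-layer 3×3 conv feature extractor: the same weights give a 121×121 output with padding 0 but a 125×125 output with padding 1.

```python
import torch
import torch.nn as nn

# Two 3x3 conv layers with padding 0: each layer shrinks the map by 2 pixels.
conv_no_pad = nn.Sequential(
    nn.Conv2d(3, 32, kernel_size=3, padding=0),
    nn.ReLU(inplace=True),
    nn.Conv2d(32, 32, kernel_size=3, padding=0),
)

# Same architecture with padding 1: the spatial size is preserved.
conv_pad = nn.Sequential(
    nn.Conv2d(3, 32, kernel_size=3, padding=1),
    nn.ReLU(inplace=True),
    nn.Conv2d(32, 32, kernel_size=3, padding=1),
)

# Padding does not change the parameter shapes, so the trained weights
# can be reused directly in the padded version.
conv_pad.load_state_dict(conv_no_pad.state_dict())

x = torch.randn(1, 3, 125, 125)
print(conv_no_pad(x).shape)  # torch.Size([1, 32, 121, 121])  -> training setting
print(conv_pad(x).shape)     # torch.Size([1, 32, 125, 125])  -> tracking setting
```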