SG-One
Batch size affects the behaviour of Masked Average Pooling
In onemodel_sg-one.py, line 37, the behaviour of torch.sum(pos_mask)
changes when the batch size is not 1: the sum is taken over the whole batch
rather than per sample. It would be better to write it in a batch-size-agnostic way.
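A minimal sketch of what I mean; `masked_average_pooling` and its argument names are hypothetical, not the identifiers used in onemodel_sg-one.py:

```python
import torch

def masked_average_pooling(feature, mask):
    """Batch-size-agnostic masked average pooling.

    feature: (B, C, H, W) support feature map
    mask:    (B, 1, H, W) binary foreground mask

    A global torch.sum(mask) mixes pixel counts across the batch,
    so the result is only correct when B == 1. Summing per sample
    keeps each example's masked average independent.
    """
    # Sum features over the masked region, per sample -> (B, C)
    masked_sum = (feature * mask).sum(dim=(2, 3))
    # Foreground pixel count per sample -> (B, 1); clamp avoids division by zero
    pixel_count = mask.sum(dim=(2, 3)).clamp(min=1e-5)
    return masked_sum / pixel_count
```

With this version, each sample in the batch is normalised by its own foreground pixel count, so batches of any size give the same result as running the samples one by one.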
@kaixin96, have you managed to run the code and reproduce the reported results? Thanks.
@kailigo I managed to run the code but didn't reach the reported performance (only 0.35 on the first split). Maybe I'm still missing something. The proposed architecture is not very complex if you examine the model code carefully, but my own re-implementation didn't achieve the reported performance either.
@kaixin96, I managed to run the code as well. Same as you, I can only reach 0.35 on split 1. For the other splits, the gaps between my reproduced results and the reported ones are even larger. We are both definitely missing something. It would be much appreciated if the author (@xiaomengyc) could provide some guidance on reproducing the results.
I will read the code over the next few days and post my results here if I get any improvement. Could you share your experience here if you reach or approach the reported results? Thanks.
@kailigo Thanks. It will be helpful.
I am afraid I don't have time to look into this right now, but I will try again later and share my experience if there is any improvement.
I used test_frame_all.py to test and got an mIoU of 0.46. Does this file give the one-shot result?
@shenggedeqiang I think that is the result. Can I ask how you trained the model? I ran the code for group 0 but only got about 0.20 mIoU.
> @shenggedeqiang I think that is the result. Can I ask how you trained the model? I ran the code for group 0 but only got about 0.20 mIoU.

In the scripts folder, you need to run the four .sh files (if you only have one GPU, you need to change the GPU ID to 0), and then run test_frame_all.py.