GaitPart
Some hyperparameters left unstated in GaitPart are clarified here!
Hi, I appreciate your great work! I have read the paper through, and I have two questions:
- In the paper you mention that the HP module horizontally splits the feature map into n parts, but I could not find the exact value of n. Could you help me?
- In the paper you say you use the Adam optimizer with a momentum value of 0.9, but I couldn't find an Adam optimizer with momentum in the PyTorch documentation. Could you help me with that?
Anyway, thank you very much! Waiting for your reply!
Thanks for your attention.
- n=16 in GaitPart.
- torch.optim.Adam(..., betas=(0.9, 0.99)) by default. I hope this is helpful to you.
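For reference, here is a minimal sketch of horizontal partitioning with the confirmed n=16 in PyTorch. The function name and the max+mean pooling per strip are my assumptions (borrowed from common part-based gait models), not taken verbatim from the GaitPart code:

```python
import torch

def horizontal_partition(feat, n_parts=16):
    """Split a feature map into n horizontal strips and pool each one.

    feat: tensor of shape (N, C, H, W); H must be divisible by n_parts.
    Returns: tensor of shape (N, n_parts, C), one vector per strip.
    The max+mean pooling here is an assumption, not necessarily
    GaitPart's exact choice.
    """
    N, C, H, W = feat.shape
    strips = feat.view(N, C, n_parts, H // n_parts, W)    # (N, C, n, h, W)
    pooled = strips.max(dim=-1).values.max(dim=-1).values \
           + strips.mean(dim=(-1, -2))                    # (N, C, n)
    return pooled.permute(0, 2, 1)                        # (N, n, C)

# Example: a CASIA-B-sized feature map, 16 parts as confirmed above.
x = torch.randn(8, 128, 16, 11)
parts = horizontal_partition(x, n_parts=16)
print(parts.shape)  # torch.Size([8, 16, 128])
```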
@ChaoFan96 Thanks for your reply! That really helps me! I have some other questions now:
- What's the exact value of s in convnet1d?
- What's the exact architecture of convnet1d? I guess it is conv1d-relu-conv1d-sigmoid; am I doing something wrong?
- At the end of the net you use an FC layer to transform features to another space. Does it transform features from 128 to 256 dimensions like GaitSet does, and do you add a nonlinear function on top of it?
Yeah, a few exact hyperparameter values were omitted from GaitPart because of my carelessness. I'm sorry for the trouble, and thank you for your carefulness. The following responses should help:
- 4
- No, you're right.
- Just a linear mapping without any nonlinear activation.
One more thing: there is a clerical error in Sec. 4.1 -> Training Details -> 3). On OU-MVLP, the value of p in each block was set to 2, 2, 8, 8 rather than 1, 1, 3, 3 (note that 2 = 2^1 and 8 = 2^3) in real practice. If you find other unstated hyperparameters in GaitPart, feel free to contact me, thank you so much!
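Putting those answers together, a minimal sketch of the confirmed conv1d-relu-conv1d-sigmoid structure could look like the following. Treating s=4 as a channel-squeeze ratio, and the kernel size and padding, are my assumptions and may not match the released code:

```python
import torch
import torch.nn as nn

class ConvNet1d(nn.Module):
    """Attention branch sketch: conv1d -> ReLU -> conv1d -> sigmoid.

    Structure as confirmed in this thread; interpreting s=4 as a
    channel-squeeze ratio and using kernel_size=3 with same-padding
    are assumptions for illustration.
    """
    def __init__(self, channels, s=4, kernel_size=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(channels, channels // s, kernel_size,
                      padding=kernel_size // 2),
            nn.ReLU(inplace=True),
            nn.Conv1d(channels // s, channels, kernel_size,
                      padding=kernel_size // 2),
            nn.Sigmoid(),
        )

    def forward(self, x):   # x: (N, C, T) per-part temporal features
        return self.net(x)  # attention scores in (0, 1)

# Per the answer above, the final FC layer is a plain linear map with
# no nonlinearity; 128 -> 256 and the bias choice are assumptions here.
fc = nn.Linear(128, 256, bias=False)
```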
@ChaoFan96 Thank you very much! You've really helped me a lot! If I have other questions, I will contact you. Best wishes!
I have one question. You said "due to it contains almost 20 times more sequences than CASIA-B, an additional block composed of two FConv Layers is stacked into the FPFE (the output channel is set to 256)". Is this additional block followed by max pooling, or is the third block followed by max pooling while the last block is not? I prefer the latter. How about you? Thank you very much!
@barbecacov Thanks for your attention! For the OU-MVLP database, neither block3 nor block4 is equipped with a max-pooling layer; only block1 and block2 are followed by max pooling. Hope this response helps you.
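Based on the two clarifications above (p = 2, 2, 8, 8 per block, and max pooling only after block1 and block2 on OU-MVLP), a rough FPFE skeleton might look like this. The FConv stand-in, the activation, the kernel sizes, and the channel widths other than the confirmed final 256 are my assumptions:

```python
import torch
import torch.nn as nn

class FConv(nn.Module):
    """Focal convolution sketch: split the input into p horizontal
    parts, run a shared 2D conv on each part, then re-stack them."""
    def __init__(self, in_c, out_c, p, kernel_size=3):
        super().__init__()
        self.p = p
        self.conv = nn.Conv2d(in_c, out_c, kernel_size,
                              padding=kernel_size // 2)

    def forward(self, x):               # x: (N, C, H, W)
        parts = x.chunk(self.p, dim=2)  # p strips along the height axis
        return torch.cat([self.conv(s) for s in parts], dim=2)

def block(in_c, out_c, p, pool):
    """Two FConv layers per block, as stated in the quoted sentence;
    LeakyReLU is an assumption."""
    layers = [FConv(in_c, out_c, p), nn.LeakyReLU(inplace=True),
              FConv(out_c, out_c, p), nn.LeakyReLU(inplace=True)]
    if pool:
        layers.append(nn.MaxPool2d(2))
    return nn.Sequential(*layers)

# OU-MVLP FPFE per this thread: p = 2, 2, 8, 8; max pooling only after
# block1 and block2; the last block outputs 256 channels.
fpfe = nn.Sequential(
    block(1,   32,  p=2, pool=True),
    block(32,  64,  p=2, pool=True),
    block(64,  128, p=8, pool=False),
    block(128, 256, p=8, pool=False),
)

x = torch.randn(2, 1, 64, 44)  # (N, 1, H, W) silhouette-like input
print(fpfe(x).shape)           # torch.Size([2, 256, 16, 11])
```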
Hello, I checked that the default betas parameter in the Adam optimizer is (0.9, 0.999). Did you change it to betas=(0.9, 0.99) or betas=(0.9, 0.9) during training?
@logic03 Hello, thanks for your attention and the correction. In real practice, I used the default betas parameter of Adam.
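So, for anyone reproducing this, the optimizer line reduces to PyTorch's defaults. A minimal sketch; the learning rate and the stand-in model below are placeholders, not values confirmed in this thread:

```python
import torch

# PyTorch's default betas for Adam are (0.9, 0.999); per the reply
# above, the defaults were used in practice.
model = torch.nn.Linear(128, 256)  # stand-in for the real network
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4,  # lr assumed
                             betas=(0.9, 0.999))
```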
Hi, OpenGait is released now! ( https://github.com/ShiqiYu/OpenGait ) This project not only contains the full code of GaitPart but also reproduces several SOTA gait recognition models. Enjoy it, and any questions or suggestions are welcome!