
Some silent hyperparameters in GaitPart have been mentioned here!

Open · OliverHxh opened this issue on Jul 12 '20 · 10 comments

Hi, thanks for your great work! I have read the paper through, and I have two questions below.

  1. In the paper you mention that the HP module horizontally splits the feature map into n parts. I read the paper through, but I could not find the exact value of n. Could you help me?

  2. In the paper you say you use the Adam optimizer with a momentum value of 0.9, but I couldn't find an Adam optimizer with momentum in the PyTorch documentation. Could you help me with that?

Anyway, thank you very much! Waiting for your reply!

OliverHxh · Jul 12 '20

Thanks for your attention.

  1. n=16 in GaitPart.
  2. torch.optim.Adam(..., betas=(0.9, 0.99)) by default; Adam's first beta coefficient plays the role of momentum (see the sketch below). I hope this helps.
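
For reference, a minimal sketch of both points, assuming PyTorch; the feature-map shape, the learning rate, and the `Linear` module are placeholders, not GaitPart's actual configuration:

```python
import torch

# Point 1: the HP module splits the feature map into n = 16 horizontal strips.
feat = torch.randn(8, 128, 16, 11)    # (batch, channels, height, width) - placeholder shape
parts = feat.chunk(16, dim=2)         # 16 strips, each of shape (8, 128, 1, 11)

# Point 2: Adam has no separate momentum argument; its first beta coefficient
# plays the role of the "momentum" of 0.9 mentioned in the paper.
model = torch.nn.Linear(128, 256)     # placeholder module
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4, betas=(0.9, 0.999))
```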

ChaoFan96 · Jul 14 '20

@ChaoFan96 thanks for your reply! That really helps me! I have some other questions now:

  1. What is the exact value of s in ConvNet1d?
  2. What is the exact architecture of ConvNet1d? I guess it is conv1d-relu-conv1d-sigmoid; am I getting something wrong?
  3. At the end of the network you use an FC layer to transform features into another space. Does it map features from 128 to 256 dimensions like GaitSet does, and do you add a nonlinear function on top of it?

OliverHxh · Jul 14 '20

Yes, a few exact hyperparameter values were omitted from GaitPart due to my carelessness. I'm sorry for the trouble, and thank you for your attentiveness. The following responses should help:

  1. s = 4.
  2. No, you're right (see the sketch below).
  3. Just a linear mapping, without any nonlinear activation.
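
To make these answers concrete, here is a rough sketch assuming PyTorch; the channel width, kernel size, and the reading of s as a channel-squeeze ratio are my guesses, not confirmed values:

```python
import torch
import torch.nn as nn

def convnet1d(channels: int, s: int = 4, kernel_size: int = 3) -> nn.Sequential:
    """conv1d -> relu -> conv1d -> sigmoid, squeezing channels by a factor of s."""
    return nn.Sequential(
        nn.Conv1d(channels, channels // s, kernel_size, padding=kernel_size // 2),
        nn.ReLU(inplace=True),
        nn.Conv1d(channels // s, channels, kernel_size, padding=kernel_size // 2),
        nn.Sigmoid(),
    )

# Item 3: the final mapping is a plain linear layer with no activation,
# e.g. 128 -> 256 as in GaitSet (sizes taken from the question above).
fc = nn.Linear(128, 256)

x = torch.randn(8, 128, 30)       # (batch, channels, frames) - placeholder
attention = convnet1d(128)(x)     # same shape as x, values in (0, 1)
```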

One more thing: there is a clerical error in Sec. 4.1 -> Training Details -> 3). On OU-MVLP, the value of p in each block was set to 2, 2, 8, 8 rather than 1, 1, 3, 3 in real practice (note that 2 = 2^1 and 8 = 2^3). If you find other silent hyperparameters in GaitPart, feel free to contact me, thank you so much!
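
For readers hitting the same point, a hedged sketch of an FConv layer along the lines the paper describes (split the input into p horizontal strips, apply a shared convolution to each, and concatenate); the kernel size and bias settings here are assumptions:

```python
import torch
import torch.nn as nn

class FConv(nn.Module):
    """Focal convolution: a shared Conv2d applied to p horizontal strips."""
    def __init__(self, in_c: int, out_c: int, p: int, kernel_size: int = 3):
        super().__init__()
        self.p = p
        self.conv = nn.Conv2d(in_c, out_c, kernel_size,
                              padding=kernel_size // 2, bias=False)

    def forward(self, x):
        strips = x.chunk(self.p, dim=2)  # split along the height axis
        return torch.cat([self.conv(s) for s in strips], dim=2)

# On OU-MVLP the four blocks use p = 2, 2, 8, 8 respectively.
layer = FConv(32, 64, p=2)
y = layer(torch.randn(1, 32, 16, 11))   # -> (1, 64, 16, 11)
```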

ChaoFan96 · Jul 15 '20

@ChaoFan96 thank you very much! You have really helped me a lot! If I have other questions, I will contact you. Best wishes!

OliverHxh · Jul 15 '20

I have one question. You said "due to it contains almost 20 times more sequences than CASIA-B, an additional block composed of two FConv Layers is stacked into the FPFE (the output channel is set to 256)". Is this additional block followed by max pooling, or is the third block followed by max pooling while the last block is not? I prefer the latter. What about you? Thank you very much!

barbecacov · Sep 01 '20

@barbecacov Thanks for your attention! For the OU-MVLP database, neither block3 nor block4 is equipped with a max-pooling layer; only block1 and block2 are followed by max pooling (see the sketch below). Hope this response helps.
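
Putting the thread's answers together, a rough sketch of the OU-MVLP FPFE layout; plain Conv2d layers stand in for the FConv layers sketched earlier, and the intermediate channel widths (32/64/128) are my assumptions, with only the final 256 confirmed above:

```python
import torch.nn as nn

def block(in_c: int, out_c: int, pool: bool) -> nn.Sequential:
    # Two conv layers per block, per the "two FConv Layers" quote above;
    # plain Conv2d stands in for FConv in this sketch.
    layers = [
        nn.Conv2d(in_c, out_c, 3, padding=1), nn.LeakyReLU(inplace=True),
        nn.Conv2d(out_c, out_c, 3, padding=1), nn.LeakyReLU(inplace=True),
    ]
    if pool:
        layers.append(nn.MaxPool2d(2))
    return nn.Sequential(*layers)

fpfe = nn.Sequential(
    block(1, 32, pool=True),      # block1, followed by maxpool
    block(32, 64, pool=True),     # block2, followed by maxpool
    block(64, 128, pool=False),   # block3, no maxpool
    block(128, 256, pool=False),  # block4 (the additional block), no maxpool
)
```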

ChaoFan96 · Sep 01 '20

Hello, I checked that the default beta parameters in the Adam optimizer are (0.9, 0.999). Did you change them to betas=(0.9, 0.99) or betas=(0.9, 0.9) during training?

logic03 · Sep 20 '20

@logic03 Hello, thanks for your attention and the correction. In real practice, I use the default beta parameters in Adam.

ChaoFan96 · Sep 21 '20

Hi, OpenGait is now released! (https://github.com/ShiqiYu/OpenGait) This project not only contains the full code of GaitPart but also reproduces several SOTA gait recognition models. Enjoy it, and any questions or suggestions are welcome!

ChaoFan96 · Oct 19 '21