
How did you warm up the identity encoder?

Open Mike-YJW opened this issue 4 years ago • 3 comments

Thank you for providing this great work.

I read your code and there seems to be no code for stage 1, warming up the identity encoder.

The paper says:

"We firstly use a state-of-the-art unsupervised Re-ID method to warm up Eid, which is then considered as a baseline in our ablation studies."

Does this mean you used the JVTC method to warm up the identity encoder?

I figured out that when an input image x is fed into the identity encoder (Eid), it outputs a feature vector fid (a 2048x4x1 feature after part average pooling in ft_net in the code). But how did you train the encoder (Eid)? Did you just plug your ft_net into the JVTC or MLC code and train it?

For example, below is the framework from the MLC paper: [figure: MLC framework]

In the above image, is the CNN layer your ft_net, and did you train the whole network so that the weights in the CNN (ft_net) get warmed up?


Is this right?

Mike-YJW avatar Sep 09 '21 08:09 Mike-YJW

Our encoder has two branches after the CNN layers: one branch outputs a 2048x4x1 feature for the generative part via part average pooling; the other outputs a 2048x1x1 feature for ReID training via normal (global) average pooling. The second branch is the one used for warm-up.
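The two pooling branches can be sketched as follows. This is a minimal PyTorch sketch, not the actual ft_net code; the backbone feature-map size is an assumption (ResNet-50 style, 2048 channels):

```python
import torch
import torch.nn as nn

class TwoBranchHead(nn.Module):
    """Two pooling branches over a shared CNN feature map."""
    def __init__(self):
        super().__init__()
        # Part average pooling -> 2048x4x1, fed to the generative part
        self.part_pool = nn.AdaptiveAvgPool2d((4, 1))
        # Global average pooling -> 2048x1x1, used for ReID warm-up
        self.global_pool = nn.AdaptiveAvgPool2d((1, 1))

    def forward(self, feat):
        return self.part_pool(feat), self.global_pool(feat)

# Assumed backbone output: batch x 2048 x H x W
feat = torch.randn(2, 2048, 16, 8)
f_part, f_id = TwoBranchHead()(feat)
print(f_part.shape)  # torch.Size([2, 2048, 4, 1])
print(f_id.shape)    # torch.Size([2, 2048, 1, 1])
```

Both branches share the same backbone features; only the pooling differs.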

We just ran the source code (JVTC or MLC) and saved the weights as Stage 1. In Stage 2, we load the saved weights into our ft_net.
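In PyTorch this weight hand-off is just a state_dict save/load. A minimal sketch with stand-in modules (the real code would save the JVTC/MLC model and load it into ft_net; the layer shapes here are purely illustrative):

```python
import torch
import torch.nn as nn

# Stand-in for the Stage-1 model (JVTC/MLC backbone)
stage1 = nn.Sequential(nn.Conv2d(3, 8, 3), nn.BatchNorm2d(8))
torch.save(stage1.state_dict(), "stage1_weights.pth")  # end of Stage 1

# Stand-in for ft_net in Stage 2; shares the backbone layer names
ft_net = nn.Sequential(nn.Conv2d(3, 8, 3), nn.BatchNorm2d(8))
state = torch.load("stage1_weights.pth")
# strict=False tolerates heads that exist in only one of the two models
missing, unexpected = ft_net.load_state_dict(state, strict=False)
print(len(missing), len(unexpected))  # 0 0 here, since the stand-ins match
```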

chenhao2345 avatar Sep 09 '21 13:09 chenhao2345

Thank you.

One more question. Do you have the code for computing the FID and SSIM scores reported in your paper? [figure: FID/SSIM results]

It would be great if you could provide this as well, for checking the generator's ability.

Mike-YJW avatar Sep 10 '21 07:09 Mike-YJW

To get FID and SSIM, first generate images with our examples/generate_data.py. Then use the code from the following projects:

FID: https://github.com/layumi/TTUR
SSIM: https://github.com/layumi/PerceptualSimilarity
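The linked repositories are what compute the reported scores. As a quick local sanity check of the same metric, SSIM is also available in scikit-image; a hedged sketch on random stand-in data (not the paper's evaluation pipeline):

```python
import numpy as np
from skimage.metrics import structural_similarity as ssim

rng = np.random.default_rng(0)
real = rng.random((64, 64))  # stand-in "real" image in [0, 1]
# Stand-in "generated" image: the real one plus mild noise
fake = np.clip(real + 0.05 * rng.standard_normal((64, 64)), 0.0, 1.0)

# SSIM is 1.0 for identical images and drops as they diverge
score = ssim(real, fake, data_range=1.0)
print(round(score, 3))
```

FID, by contrast, is a distribution-level metric (Frechet distance between Inception features of real and generated sets), so it needs the full image folders rather than a single pair.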

chenhao2345 avatar Sep 13 '21 08:09 chenhao2345