yangmin09
Have you solved it? I met the same problem.
@SmartMachineBay I got the same loss value as you when I fine-tuned the network on my own dataset. Do you have any ideas?
Yes, we trained for 100 epochs. In fact, I have tried the TF implementation; it did not work well, and the batch size could only be set to a small value...
@zjcs Thank you for your attention. 1. I directly used batch size 256 and lr 0.1 in my training. 2. I haven't tried 25 epochs. 3. When I use...
@andrefaraujo Thank you for the reminder. I was already aware of this problem, and I have tested the model by cropping the query in DOLG. The results differ just a...
> "I have tested the model by cropping the query in DOLG. The results differ just a little when testing on Roxf and Rpar without 1M and can achieve the performance...
@sungonce @andrefaraujo Sorry for the misreported performances, which were due to negligence by the trainee during the evaluation process. I have corrected part of the final results in this repo, and the 1M results...
> Hi. Thanks for posting good code. @feymanpriv
>
> I am aware of the above issue. I am writing a paper and would like to cite yours.
>
> I...
> I found that a paper named "[Deep Fusion of Multi-attentive Local and Global Features with Higher Efficiency for Image Retrieval](https://openreview.net/forum?id=OqlohL9sVO&referrer=%5Bthe%20profile%20of%20Baorong%20Shi%5D(%2Fprofile%3Fid%3D~Baorong_Shi1))" was recently submitted to ICLR 2022, and the first author...
> Hi @feymanpriv, I ran experiments with the [R101-DOLG pytorch model](https://pan.baidu.com/s/1_osEyC53txA-_1cbadBzLQ) you uploaded and found huge differences between the performance on each dataset and the numbers you reported, especially on the...