Hooks
@dereyly I ran into the same problem. I looked at your training log: the loss becomes very large at around 500 iterations, and the final loss_rpn_cls is still about 0.3. It means the...
@haihuanzxx I added them in my project, so we can talk about it.
@qiuhaining Hi, sorry for the late reply. This code incorporates some of the schemes from the paper into my own project, but it is only a reference, not a faithful reimplementation of the paper: the baseline is different, some details (such as regressing negative-sample boxes toward the center) are not used, it has not been tuned on CityPersons, and many parameters need special settings. If there is anything worth discussing, feel free to contact me privately; PRs are also welcome.
Sorry, I haven't tested on that dataset. Some parameters may need to be modified, especially the attention scale, which is not explained clearly in the paper.
@luuuyi The anchor assignment and data augmentation are not the same as in the paper. I will update them when I have time.
If the images in your dataset do not all have the same size, the memory needed keeps changing between batches. You can reduce the input size in dataloader.py or use a smaller batch size.
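A minimal sketch of the idea, not the repo's actual dataloader.py: resizing every image to one fixed size keeps per-batch memory constant, and a smaller `batch_size` caps peak usage. All class and parameter names here are illustrative assumptions.

```python
import torch
from torch.utils.data import DataLoader, Dataset

class FixedSizeDataset(Dataset):
    """Wrap any (image, label) dataset and resize images to one fixed size
    so that every batch uses the same amount of memory."""
    def __init__(self, base, size=(256, 512)):  # size is an illustrative choice
        self.base, self.size = base, size

    def __len__(self):
        return len(self.base)

    def __getitem__(self, i):
        img, label = self.base[i]
        # Bilinear resize; interpolate expects a 4D (N, C, H, W) tensor.
        img = torch.nn.functional.interpolate(
            img.unsqueeze(0), size=self.size,
            mode="bilinear", align_corners=False).squeeze(0)
        return img, label

# Dummy dataset with varying image sizes, for demonstration only.
class RandomImages(Dataset):
    def __len__(self):
        return 4

    def __getitem__(self, i):
        h, w = 200 + 10 * i, 400 + 20 * i
        return torch.rand(3, h, w), 0

# A smaller batch_size further reduces peak memory.
loader = DataLoader(FixedSizeDataset(RandomImages()), batch_size=2)
for imgs, labels in loader:
    print(imgs.shape)  # every batch is now [2, 3, 256, 512]
```

Because every resized image has the same shape, the default collate function can stack them without padding.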
On CityPersons, it is correct.
I have not tested on Caltech. I wonder whether your label file is correct; otherwise, you may need to try different hyper-parameters.
@sigal-raab I just ran the provided script: `python -m train.train_mdm --save_dir save/my_humanml_trans_enc_512 --dataset humanml`. The training log
@sigal-raab
- Ubuntu 18.04
- CUDA 10.1
- PyTorch 1.7.1

The environment:

```yaml
name: mdm
channels:
  - pytorch
  - conda-forge
  - defaults
dependencies:
  - _libgcc_mutex=0.1=main
  - _openmp_mutex=5.1=1_gnu
  - beautifulsoup4=4.11.1=pyha770c72_0
  - blas=1.0=mkl...
```