Han Zhang

19 comments by Han Zhang

Wouldn't it be more convenient to use another router as the AP? That way you wouldn't need to swap in a 3A power supply either.

Version 0.7.5 already adds this header; just update manually: `pip install -U easyquotation`
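For anyone updating, a quick smoke test after the upgrade might look like this (a minimal sketch assuming the standard easyquotation API; the data source and stock code are illustrative, not from the original comment):

```python
import easyquotation

# pick a quote source; 'sina' is one of the built-in backends
quotation = easyquotation.use("sina")

# fetch a real-time quote for a single (illustrative) stock code
print(quotation.real("000001"))
```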

Or is this performance boost a result of pretraining on ImageNet-22k? What is the pretraining dataset for segformer-b5 and convnext-XL? Why is there a big gap between their...

I suspect some of the pretraining data may have visual similarities to ADE20K. Reproducing the entire experiment, including pretraining on ImageNet-22k, requires a large dataset download and many training tricks (even...

I notice that for BEiT, training for 320k iterations lets the model reach 56+ mIoU. Training for a longer time seems to be a potential cause. But this is not guaranteed,...

> > I notice that for BEiT, training for 320k iterations lets the model reach 56+ mIoU. Training for a longer time seems to be a potential cause. But this is...

> the learning rate seems not right, it stopped decreasing at 1e-5, and became 0.0 afterwards

Why does this happen? I did not modify the scheduler settings.

I'm afraid this is not the reason. The learning rate becomes zero in the [benchmark](https://download.openmmlab.com/mmsegmentation/v0.5/segformer/segformer_mit-b5_8x1_1024x1024_160k_cityscapes/segformer_mit-b5_8x1_1024x1024_160k_cityscapes_20211206_072934.log.json#:~:text=%7B%22mode%22%3A%20%22train%22%2C%20%22epoch%22%3A%20395%2C%20%22iter%22%3A%20146700,acc_seg%22%3A%2084.08223%2C%20%22loss%22%3A%200.03524%2C%20%22time%22%3A%201.51775%7D) too.
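For context, the SegFormer configs in mmsegmentation use a poly learning-rate policy with `power=1.0` and `min_lr=0.0`, which drives the LR to exactly zero at the final iteration, so the behavior in the log is expected. A minimal sketch of that schedule (values illustrative):

```python
# poly LR decay as used in mmseg-style configs: policy='poly', power=1.0, min_lr=0.0
base_lr = 6e-5        # typical SegFormer base learning rate (illustrative)
max_iters = 160_000   # matches the 160k-iteration schedule
power = 1.0

def poly_lr(cur_iter: int) -> float:
    # decays smoothly to min_lr (0.0 here) at the last iteration
    return base_lr * (1 - cur_iter / max_iters) ** power

for it in (0, 80_000, 158_000, 160_000):
    print(it, poly_lr(it))
# 0       -> 6e-05
# 80000   -> 3e-05
# 158000  -> ~7.5e-07  (shows up as ~0.0 in the training log near the end)
# 160000  -> 0.0
```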

> Hi @fingertap, thanks for your report. We're working on reproducing the result and will get back to you soon. However, we all know that the training results cannot be the...

Hi @xiexinch, there is a huge gap between my runs and yours! Actually, without the --deterministic flag, I got an even worse score. I will attach the log later. Any ideas...
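For reference, in mmseg-style training scripts the `--deterministic` flag is usually passed into a seeding helper like mmcv's `set_random_seed`. A sketch of the common pattern (not copied from the repo, so treat the exact names as an assumption):

```python
import random

import numpy as np
import torch

def set_random_seed(seed: int, deterministic: bool = False) -> None:
    """Seed Python, NumPy, and PyTorch RNGs; optionally force deterministic cuDNN."""
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)
    if deterministic:
        # reproducible runs at the cost of speed; without this, cuDNN may
        # select non-deterministic kernels and results vary run to run
        torch.backends.cudnn.deterministic = True
        torch.backends.cudnn.benchmark = False
```

Without the flag, some run-to-run variance from non-deterministic kernels is normal, though it should not account for a large mIoU gap on its own.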