DSS-pytorch
I cannot reproduce the best result of DSS v2
I just copied your model, loss, and optimizer definitions and kept all other settings the same, except that the learning rate was set to 1e-4 and the number of epochs to 100. By epoch 100 the training curve seems to have converged, but when I test the model it gives an MAE of 0.069 and a max F-beta of 0.880. Did you change any default settings in your training?
I just use 1e-6 as the learning rate.
- After 100 epochs the loss changes slowly. You can adjust your learning rate; 1e-4 may be too large. For example, use a dynamic learning rate (see the sketch after this list).
- You can plot log(loss) to amplify your loss curve and check whether it has converged (the raw curve is hard to read in visdom).
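A minimal sketch, assuming a standard PyTorch training setup (this is not the repo's actual loop, and `validate_one_epoch` is a hypothetical helper standing in for your own validation pass), of what a dynamic learning rate could look like with a built-in scheduler:

```python
# Sketch only: reduce the learning rate when the validation loss plateaus.
import torch
import torch.nn as nn

model = nn.Conv2d(3, 1, kernel_size=3, padding=1)   # stand-in for the DSS network
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
    optimizer, mode='min', factor=0.1, patience=10)

for epoch in range(100):
    val_loss = validate_one_epoch(model)   # hypothetical helper: your validation pass
    scheduler.step(val_loss)               # drops lr by 10x after 10 stalled epochs
```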
@AceCoooool Thank you for your reply. I am training again with 1e-6 and 700 epochs. I log 1) the training loss of each batch and 2) the average training and validation loss of each epoch, and after each epoch I also plot 3) the MAE and 4) the max F-beta score, just as you do in your code. Thanks for your work; the visualization code was easy to follow. By "converged" I meant that the MAE, F-beta, and loss all change very slowly; a dynamic learning rate might help. Thank you again.
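For reference, this is roughly how I understand the two metrics to be computed for saliency maps (a sketch based on the usual definitions, not copied from this repo): MAE is the mean absolute difference between prediction and ground truth, and max F-beta is the best F-measure over a sweep of binarization thresholds with beta^2 = 0.3.

```python
import numpy as np

def mae(pred, gt):
    # pred, gt: float arrays in [0, 1] with the same shape
    return np.abs(pred - gt).mean()

def max_f_beta(pred, gt, beta2=0.3, num_thresholds=255):
    gt_bin = gt > 0.5
    best = 0.0
    for t in np.linspace(0.0, 1.0, num_thresholds):
        pred_bin = pred > t
        tp = np.logical_and(pred_bin, gt_bin).sum()
        precision = tp / (pred_bin.sum() + 1e-8)
        recall = tp / (gt_bin.sum() + 1e-8)
        f = (1 + beta2) * precision * recall / (beta2 * precision + recall + 1e-8)
        best = max(best, f)
    return best
```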
Thanks for the work. Did you get better results than the ones reported by AceCoooool? I find that the reproduced results are lower than those in the paper. I wonder whether something is different: should we try some data augmentation, increase the input size, or something else?
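In case it helps, here is a small sketch of the kind of augmentation I have in mind (my own assumption, not the repo's pipeline): a joint random horizontal flip of image and mask, plus a larger resize; the 352 input size is just an illustrative value.

```python
import random
from PIL import Image
import torchvision.transforms.functional as TF

def augment(img: Image.Image, mask: Image.Image, size=352):
    # Resize image and mask together, then flip both with probability 0.5.
    img = TF.resize(img, (size, size))
    mask = TF.resize(mask, (size, size))
    if random.random() < 0.5:
        img = TF.hflip(img)
        mask = TF.hflip(mask)
    return TF.to_tensor(img), TF.to_tensor(mask)
```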
I can't reproduce the best results from the paper. Has anyone done it? Kindly share how...