deep-landmark
Why are the results not stable?
Without changing any other parameters, training again can give better (or maybe worse) results?
Because the network initialization is random, there is no guarantee that the loss will be stable across runs or that it will reach a low value. You should consider restarting the training or tuning some parameters if the loss is not what you want.
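To make this concrete, here is a minimal sketch (plain NumPy, not deep-landmark's own code; `init_weights` is a hypothetical helper) showing how unseeded random initialization makes two otherwise identical runs start from different weights, and how fixing the seed restores reproducibility:

```python
import numpy as np

def init_weights(shape, seed=None):
    """Hypothetical Gaussian weight init, similar in spirit to a
    default Caffe-style filler."""
    rng = np.random.RandomState(seed)
    return rng.normal(loc=0.0, scale=0.01, size=shape)

# Two unseeded runs start from different weights, so training can land
# in different local minima and produce different final losses.
w1 = init_weights((64, 3, 5, 5))
w2 = init_weights((64, 3, 5, 5))
print(np.allclose(w1, w2))   # False: runs are not reproducible

# Fixing the seed gives both runs an identical starting point.
w3 = init_weights((64, 3, 5, 5), seed=42)
w4 = init_weights((64, 3, 5, 5), seed=42)
print(np.allclose(w3, w4))   # True: identical initialization
```

If you train with Caffe, setting the `random_seed` field in the solver prototxt should serve the same purpose for a single-process run.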
If I get more data and fine-tune the models, maybe I can get better results, because I will have a better network initialization.
More data can help train the network, but the network initialization has nothing to do with your data. In practice, the default weight initialization method is fine for most situations, and the loss is likely to converge to a small value as training goes on.
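The sketch below (again plain NumPy, with a hypothetical `xavier_init` helper) illustrates why: a standard initialization such as Xavier/Glorot is a function of the layer's shape alone, so it is drawn the same way whether you have a thousand images or a million. More data improves the gradient estimates during training, not the starting point.

```python
import numpy as np

def xavier_init(fan_in, fan_out, rng=np.random):
    """Uniform Xavier (Glorot) init: the scale is set purely by the
    layer dimensions, never by the training data."""
    limit = np.sqrt(6.0 / (fan_in + fan_out))
    return rng.uniform(-limit, limit, size=(fan_out, fan_in))

# The draw below does not look at any data at all.
w = xavier_init(fan_in=256, fan_out=128)
print(w.shape, w.min(), w.max())
```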