Why does L-BFGS perform differently in tensorflow.compat.v1 and pytorch?
Hi,
I tried to train the model with L-BFGS after 15000 iterations of Adam, but I got different results from the tensorflow.compat.v1 and pytorch backends, even though I use exactly the same code and only switch the backend.
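For reference, here is a minimal sketch of the training sequence I mean. The PDE is just a toy problem, not my actual setup; the two runs differ only in the `DDE_BACKEND` setting.

```python
import deepxde as dde

# Toy 1D problem u'' = 2 with zero Dirichlet BCs, only to illustrate the
# Adam -> L-BFGS sequence; my actual PDE is different.
def pde(x, y):
    dy_xx = dde.grad.hessian(y, x)
    return dy_xx - 2

geom = dde.geometry.Interval(-1, 1)
bc = dde.icbc.DirichletBC(geom, lambda x: 0, lambda x, on_boundary: on_boundary)
data = dde.data.PDE(geom, pde, bc, num_domain=32, num_boundary=2)
net = dde.nn.FNN([1] + [32] * 3 + [1], "tanh", "Glorot uniform")
model = dde.Model(data, net)

# Stage 1: 15000 iterations of Adam
model.compile("adam", lr=1e-3)
model.train(iterations=15000)

# Stage 2: switch to L-BFGS and continue from the Adam solution
model.compile("L-BFGS")
losshistory, train_state = model.train()
dde.saveplot(losshistory, train_state, issave=False, isplot=True)
```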
This is the loss history using tensorflow.compat.v1
This is the loss history using PyTorch
Does anyone have any idea why this happens? Does this mean we have to use TensorFlow if we want to use the L-BFGS optimizer?
Thanks a lot
Hello, try using the same seed for both cases: `dde.config.set_random_seed(1)`. Maybe the results differ because of different initializations.
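For example (just a sketch of where the call goes; it has to run before the data, network, and model are built so that both backends start from the same initialization):

```python
import deepxde as dde

# Seed NumPy and the backend RNGs before building data/net/model,
# so weight initialization and point sampling are reproducible.
dde.config.set_random_seed(1)

# ... then build geom, data, net, and dde.Model as usual
```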
Thanks for your advice, but I tried several times and got similar results each time, so I don't think this is caused by the randomness of the initialization.
That is because L-BFGS is implemented differently in the two libraries. You can find links to the particular implementations (papers) in the source code of TensorFlow and PyTorch.
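For what it's worth (my understanding from reading the DeepXDE source, not something I have verified in detail): the tensorflow.compat.v1 backend runs L-BFGS through SciPy's L-BFGS-B, while the pytorch backend wraps torch.optim.LBFGS, and the two use different line searches and stopping rules. If I remember the API correctly, you can at least push the hyperparameters of both towards the same values with `dde.optimizers.set_LBFGS_options` before compiling with L-BFGS:

```python
import deepxde as dde

# Use the same L-BFGS settings on both backends (values here are examples).
# DeepXDE forwards them to the backend-specific implementation, so the runs
# become more comparable, but the underlying algorithms still differ.
dde.optimizers.set_LBFGS_options(
    maxcor=100,     # history size of the limited-memory Hessian approximation
    ftol=1e-12,     # tolerance on the relative decrease of the loss
    gtol=1e-8,      # tolerance on the gradient norm
    maxiter=15000,  # maximum number of L-BFGS iterations
    maxls=50,       # maximum line-search steps per iteration
)

# `model` is the dde.Model already trained with Adam, as in the sketch above.
model.compile("L-BFGS")
model.train()
```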