
How to implement learning rate annealing?

Open AJAXJR24 opened this issue 1 year ago • 2 comments

Dear @lululxvi, hi and thanks for your help.

  1. I want to know how I can implement the underlying LRA algorithm in DeepXDE.
  2. How do I define equation number 15?


AJAXJR24 avatar Dec 11 '23 04:12 AJAXJR24

  • While your question is about loss weights rather than the learning rate, dynamically updating them in TensorFlow is achievable.

  • The key is to create custom callbacks that adjust the weights during training. This requires a solid understanding of TensorFlow's internals, particularly the autograph functionality.

  • If diving into that code complexity isn't your preference, consider alternative PINN libraries such as SciANN or NVIDIA Modulus, which offer built-in dynamic loss-weight functionality.

  • Alternatively, you can refer to (this blog) to learn how to construct a PINN with adaptive loss weights from scratch.
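If it helps, the core update that such a callback would implement (the adaptive loss-weight rule from the learning-rate-annealing paper of Wang et al., 2021) can be sketched in plain NumPy. This is a minimal illustration, not DeepXDE or TensorFlow API; the function name `lra_update` and the toy gradient arrays are invented for the example:

```python
import numpy as np

def lra_update(weight, grad_res, grad_loss_i, alpha=0.1):
    """One step of the Wang et al. (2021) adaptive loss-weight update:
        lambda_hat = max|grad of residual loss| / mean|grad of loss_i|
        weight    <- (1 - alpha) * weight + alpha * lambda_hat
    `grad_res` and `grad_loss_i` are flattened gradients of the PDE
    residual loss and the i-th (e.g. boundary) loss w.r.t. the
    network parameters."""
    lambda_hat = np.max(np.abs(grad_res)) / np.mean(np.abs(grad_loss_i))
    return (1.0 - alpha) * weight + alpha * lambda_hat

# Toy gradients standing in for autodiff output:
grad_res = np.array([1.0, -4.0, 2.0])   # gradient of the PDE-residual loss
grad_bc = np.array([1.0, 1.0])          # gradient of the boundary loss
new_weight = lra_update(1.0, grad_res, grad_bc, alpha=0.5)
# lambda_hat = 4 / 1 = 4, so new_weight = 0.5 * 1 + 0.5 * 4 = 2.5
```

In a real training loop the two gradients would come from automatic differentiation of each loss term separately (e.g. `tf.GradientTape` in TensorFlow), and the updated weight would rescale the corresponding loss before the next optimizer step.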

haison19952013 avatar Dec 12 '23 10:12 haison19952013

No, this is not implemented in DeepXDE, and to be honest, LRA (learning rate annealing) isn't very effective on many problems. It is, however, implemented in NVIDIA Modulus. Personally, I would just stick to constant coefficients and use deeper networks. Here is one of my papers where I solved PDEs with discontinuous solutions without any adaptive coefficients: paper

praksharma avatar Dec 13 '23 21:12 praksharma