How to implement learning rate annealing?
Dear @lululxvi, hi and thanks for your help.
- I want to know how I can implement the underlying LRA algorithm in DeepXDE.
- How do I define equation number 15?
While your question is about loss weights, not the learning rate, dynamically updating them in TensorFlow is achievable.
- The key is to create custom callbacks that adjust the weights during training. This requires a deep understanding of TensorFlow's internal workings, particularly the autograph functionality.
- If diving into code complexity isn't your preference, consider alternative PINN libraries like sciann or modulus that offer built-in dynamic loss weight functionality.
- Or you can refer to (this blog) to learn how to construct a PINN with adaptive loss weights from scratch.
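In case it helps before you dive into callbacks, here is a minimal, framework-free sketch of the adaptive-weight update used by learning-rate-annealing schemes in the spirit of Wang, Teng & Perdikaris (2021). All names and values here are illustrative placeholders: in a real PINN, `g_res` and `g_bc` would come from backpropagating the residual and boundary losses at each training step, and `alpha` is the moving-average rate you would tune.

```python
def lra_update(weights, grads_res, grads_bc_terms, alpha=0.1):
    """One step of a learning-rate-annealing style loss-weight update.

    grads_res: gradient of the PDE-residual loss w.r.t. the network
        parameters (flattened to a list of floats for this sketch).
    grads_bc_terms: one gradient list per weighted loss term
        (e.g. each boundary/initial condition).
    """
    max_res = max(abs(g) for g in grads_res)
    new_weights = []
    for w, grads in zip(weights, grads_bc_terms):
        mean_abs = sum(abs(g) for g in grads) / len(grads)
        # instantaneous estimate: max|grad L_res| / mean|grad L_i|
        lam_hat = max_res / mean_abs
        # exponential moving average damps oscillation between steps
        new_weights.append((1.0 - alpha) * w + alpha * lam_hat)
    return new_weights

# toy gradient values standing in for real backprop results
g_res = [0.5, -2.0, 1.5]
g_bc = [0.01, -0.02, 0.015]
print(lra_update([1.0], g_res, [g_bc]))  # → [14.2333...]
```

If you wire this into a DeepXDE training loop via a custom callback, the main work is extracting the per-term gradients each step, which is where the autograph complexity mentioned above comes in.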
No, this is not implemented in DeepXDE, and to be honest, LRA (learning rate annealing) isn't very effective on many problems. Anyway, it is implemented in NVIDIA Modulus. Personally, I would say just stick to constant coefficients and use deeper networks. Here is one of my papers where we solved PDEs with discontinuous solutions without any adaptive coefficients: paper
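For the constant-coefficient route suggested above, DeepXDE already supports fixed loss weights at compile time via the `loss_weights` argument of `Model.compile`. A sketch of the relevant configuration (the weight values are placeholders, and `model` is assumed to be an already-constructed `dde.Model`):

```python
# Weight each loss term with a fixed constant; the list order matches
# the order of the loss terms (e.g. [PDE residual, boundary condition]).
model.compile("adam", lr=1e-3, loss_weights=[1, 100])
model.train(iterations=20000)
```

Tuning these constants by hand is often enough, and avoids the instability that adaptive schemes can introduce.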