NeuralPDE.jl
Added i) Inverse Dirichlet Adaptive Loss and ii) Neural Tangent Kernel Loss Implementations
Instead of building K from the full Jacobians as the authors do, I have used dot products, since our implementation only needs the main-diagonal entries of the K matrix (Algorithm 1, p. 10).
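To illustrate the point above, here is a minimal, hypothetical sketch (not the code in this PR) of how the trace of an NTK block K = J Jᵀ can be accumulated from per-point gradient dot products; `residuals` and `ntk_trace` are illustrative names, and Zygote is assumed for the gradients:

```julia
using LinearAlgebra, Zygote

# `residuals(θ)` stands for a hypothetical function returning the vector of
# per-point PDE (or boundary) residuals at parameters θ.
function ntk_trace(residuals, θ)
    r = residuals(θ)
    tr = 0.0
    for i in eachindex(r)
        gᵢ = Zygote.gradient(p -> residuals(p)[i], θ)[1]  # ∇θ rᵢ, i.e. row i of the Jacobian
        tr += dot(gᵢ, gᵢ)                                 # diagonal entry K_ii = ‖∇θ rᵢ‖²
    end
    return tr
end
```

The NTK weights are then ratios of such traces across the loss groups (e.g. boundary vs. PDE residuals), so only the diagonal of K is ever needed and the full Jacobian never has to be materialized.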
Put Plots and Revise into your v1.8 environment, and add `using Plots, Revise` to your juliarc.
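For example, from the Julia (v1.8) REPL, something like the following; the startup-file path shown is the usual default and may differ on your setup:

```julia
using Pkg
Pkg.add(["Plots", "Revise"])   # adds both packages to the active v1.8 environment

# then in ~/.julia/config/startup.jl (the "juliarc"), add the line:
# using Plots, Revise
```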
Note that there is starter code for this in https://github.com/SciML/NeuralPDE.jl/pull/504 and https://github.com/SciML/NeuralPDE.jl/pull/506. It doesn't make sense to do both in the same PR, though.
@xtalax @ChrisRackauckas I have made some changes according to your suggestions.
- Two independent commits have been made, one for each of the two loss functions.
- The Inverse Dirichlet loss is performing well, but the loss under the Neural Tangent Kernel weighting remains high (see the sketch after this list).
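For reference, this is a minimal, hypothetical sketch of the inverse-Dirichlet weighting idea (not the PR's actual code): each loss term is rescaled by the ratio of the largest gradient standard deviation to its own, with a moving-average update. The names `losses`, `inverse_dirichlet_weights`, and the default `α` are assumptions for illustration; Zygote is assumed for the gradients.

```julia
using Zygote, Statistics

# `losses` is a hypothetical vector of closures, each returning one scalar loss term.
function inverse_dirichlet_weights(losses, θ; old_λ = ones(length(losses)), α = 0.9)
    σ = [std(Zygote.gradient(L, θ)[1]) for L in losses]   # std of ∇θ L_k for each term
    new_λ = maximum(σ) ./ σ                               # γ_k = max_i σ_i / σ_k
    return α .* old_λ .+ (1 - α) .* new_λ                 # moving-average update of the weights
end
```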
Please let me know of any improvements that can be made in the implementations. Any direction on how to improve the testing would also be very helpful!
Rebase onto master.
The next step would be to set up tutorials and benchmarks with these.
Looks like tests failed.