NeuralPDE.jl
Physics-Informed Neural Networks (PINN) Solvers of (Partial) Differential Equations for Scientific Machine Learning (SciML) accelerated simulation
Implementing the Magnitude Normalization adaptive loss method proposed in the paper ["Optimally weighted loss functions for solving PDEs with Neural Networks"](https://arxiv.org/abs/2002.06269) by Remco van der Meer, Cornelis Oosterlee, Anastasia...
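As a rough sketch (not the NeuralPDE.jl implementation, and `normalize_weights` is a hypothetical helper name): the idea behind magnitude normalization is to weight each loss term by the inverse of its magnitude, so that the PDE residual, boundary, and initial condition terms all contribute on a comparable scale.

```julia
# Hypothetical illustration of magnitude-normalized loss weighting.
# Each term is weighted by the inverse of its current magnitude, then the
# weights are renormalized to sum to one; a large term no longer dominates.
function normalize_weights(losses::Vector{Float64}; eps=1e-8)
    w = 1.0 ./ (abs.(losses) .+ eps)  # inverse-magnitude weights
    return w ./ sum(w)                # normalize so sum(w) == 1
end

losses = [1e3, 1.0, 1e-3]   # e.g. PDE residual, BC, and IC losses
w = normalize_weights(losses)
weighted = w .* losses       # the weighted terms are now nearly equal
```

After weighting, every term contributes roughly the same amount to the total loss, regardless of its raw scale.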
Implement the Multiscale Fourier Features (MFF) neural network architecture proposed in the paper "On the eigenvector bias of Fourier feature networks: From regression to solving multi-scale PDEs with physics-informed neural...
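A minimal sketch of the multiscale Fourier feature idea (illustrative only; the struct and names here are not NeuralPDE.jl API): inputs are passed through sin/cos of fixed random frequencies drawn at several scales σ, and the embeddings are combined so the network can represent both low- and high-frequency solution components.

```julia
using Random

# A fixed random Fourier feature embedding at one frequency scale σ.
struct FourierFeatures
    B::Matrix{Float64}   # random frequency matrix, fixed after init
end

FourierFeatures(in_dim::Int, m::Int, σ::Float64) =
    FourierFeatures(σ .* randn(MersenneTwister(0), m, in_dim))

# Embed x ∈ R^in_dim into R^(2m) via [sin(2πBx); cos(2πBx)]
(ff::FourierFeatures)(x::AbstractVector) =
    vcat(sin.(2π .* ff.B * x), cos.(2π .* ff.B * x))

# Two scales: σ = 1 targets low frequencies, σ = 10 high frequencies;
# in the MFF architecture each embedding would feed its own subnetwork.
ff_low  = FourierFeatures(2, 16, 1.0)
ff_high = FourierFeatures(2, 16, 10.0)
z = vcat(ff_low([0.5, 0.5]), ff_high([0.5, 0.5]))  # 64-dim embedding
```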
I am moving this over here from the [Discourse](https://discourse.julialang.org/t/neuralpde-jl-how-to-use-random-noise-as-the-initial-condition/74723/3) as per Chris's request. A short summary is below. Essentially I'm trying to use random noise as the initial condition of...
like the [Schrodinger example](https://github.com/lululxvi/deepxde/blob/master/examples/Schrodinger.ipynb), as shown in the following figure: ![](https://user-images.githubusercontent.com/45444680/147384943-b56bf6d3-20cf-4a13-9250-b1faf07de462.png)
Say we are solving the advection equation with a periodic boundary condition

```latex
u_t + u_x = 0, \quad u(0,x) = \sin(2\pi x), \quad u(t,0) = u(t,1)
```

The following code works well:

```julia
using NeuralPDE, ModelingToolkit, Plots, Flux, DiffEqFlux, GalacticOptim, Optim
using ModelingToolkit:...
```
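This problem has a closed-form traveling-wave solution, u(t, x) = sin(2π(x − t)), which is handy as a reference when checking the PINN's output (this check is an addition here, not part of the snippet above):

```julia
# Exact solution of u_t + u_x = 0 with u(0,x) = sin(2πx) and
# periodic BC u(t,0) = u(t,1): the initial profile advects rightward
# at unit speed, so u(t,x) = sin(2π(x - t)).
u_exact(t, x) = sin(2π * (x - t))

# Sanity checks: initial condition and periodicity in x
@assert isapprox(u_exact(0.0, 0.25), sin(2π * 0.25))
@assert isapprox(u_exact(0.3, 0.0), u_exact(0.3, 1.0); atol=1e-12)
```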
I'm sure @zoemcc noticed this when doing the experiment manager, but basically what happens is that there's a single `discretization.iteration` value that is then used in the loss functions, and...
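The failure mode described above can be illustrated with a toy example (this is a sketch of the aliasing problem, not NeuralPDE.jl internals): when several loss closures capture one mutable counter, each call advances the shared state, so no closure sees an independent iteration count.

```julia
# Two loss closures sharing a single mutable iteration counter.
iteration = Ref(0)
loss_a() = (iteration[] += 1; iteration[])
loss_b() = (iteration[] += 1; iteration[])

a = loss_a()   # shared counter becomes 1
b = loss_b()   # shared counter becomes 2, not an independent count
```

Giving each loss function its own counter (or passing the iteration explicitly) avoids the cross-talk.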
Work-in-progress for [#355](https://github.com/SciML/NeuralPDE.jl/issues/355)
To do:
1. Fix LaTeX rendering
2. Add image
# If CUDA is not used it works well, but once `intΘ = CuArray...`, NaNs appear.

```julia
using NeuralPDE, Flux, ModelingToolkit, GalacticOptim, Optim, DiffEqFlux
using Quadrature, Cuba, CUDA, QuasiMonteCarlo
import...
```