NeuralPDE.jl
Physics-informed neural operator for ODEs
Implementation of the physics-informed neural operator (PINO) method for solving parametric Ordinary Differential Equations (ODEs) using DeepONet.
https://github.com/SciML/NeuralPDE.jl/issues/575
Checklist
- [x] PINO ODE
- [x] family of ODEs by parameter
- [x] physics-informed DeepONet
- [x] tests
- [x] additional loss test
- [x] docs
- [x] multiple parameters
- [x] test with vector outputs and multiple parameters
- [x] migrate to LuxNeuralOperators
- [x] interpret output on another mesh
- [x] vector output #871
- [x] update docs
https://arxiv.org/abs/2103.10974 https://arxiv.org/abs/2111.03794
@ChrisRackauckas I need help with package versions. Adding the NeuralOperator.jl dependency to the project fails CI. I've tried a little to line up suitable versions, but without success.
I think we should update NeuralOperator.jl dependency versions rather than changing things here.
ok, I will try.
It looks like it requires refactoring NeuralOperator's code, which takes time, and I don't know how to quickly solve this problem.
So for a while I will just remove the NeuralOperator dependency and figure it out later.
In the end, one option could be to move the PINO methods to a detached directory with its own Project.toml, as is done with NeuralPDELogging.
@ChrisRackauckas @sathvikbhagavan Could you please review the PR? I believe it is ready for merge.
It doesn't look like this can do the pure PINO with just the physics loss?
Also, it looks like this needs a big rebase.
It can with just the physics loss, but obviously not as well as with data. I will add a test.
OK, what do you think needs a rebase?
This should have had a bit more of an API discussion before starting. The API is really the key here. I think PINO just falls out of doing that API correctly. So let's dive into https://arxiv.org/abs/2103.10974 .
The core element of PINO is the way that the network takes functional inputs. While that's the theory, in practice it usually gets simplified to learning over some vector space of inputs, like in https://arxiv.org/abs/2103.10974. So the point of the PINO is that it should learn over basically the space of u0 and p.
Thus there's a few things to disconnect here. You could have a non neural operator also take in u0 and p. But it would treat it slightly differently. But the sample space and the neural network are not necessarily the same thing here.
So that leads us down the path to an API. First of all, the PINO should be like PhysicsInformedNN except it should make use of information from the bounds metadata https://docs.sciml.ai/ModelingToolkit/stable/basics/Variable_metadata/#Bounds of the parameters. For anything with bounds, it should seek to train a neural network that satisfies all parameter values within the bounds. To keep things simple, it should have a keyword argument bounds which takes an array of tuples for the bounds of the parameters, pre-populated to match the bounds from the metadata. Anything without bounds would be treated as a constant.
Initial conditions can simply be set to be functions of parameters, so only parameters need to be supported and a tutorial can handle the rest.
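For reference, a short sketch of how bounds metadata looks in ModelingToolkit per the linked docs; the parameter names here are illustrative:

```julia
using ModelingToolkit

# p carries bounds metadata and would be sampled over (0.1, π/2);
# q has no bounds and would be treated as a constant.
@parameters p [bounds = (0.1, pi / 2)] q

hasbounds(p)  # true: p has bounds metadata
hasbounds(q)  # false: q would be held constant
getbounds(p)  # returns the (lower, upper) tuple for p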
So PINO would take the same information as PhysicsInformedNN except that it would also require parameter bounds. Then its physical loss would sample over the parameter space as well. That would possibly need its own strategy, but random and quasi-random would do for starters.
For solution data, PhysicsInformedNN and PINO should just have a nice way of supporting that; that's just a completely separate feature.
What would be required though is a slightly different implementation of the NN. We should then require NN(indepvars, p) where the first part is the independent variables [t, x, y, ...]. Thus this is a bit of a difference from the NN form from before. But this is required, for example, for things like the DeepONets, which treat the two in separate neural networks and then merge. The output is a solution for those parameters.
So given this is the PINO... I don't understand how this PR implements an API like this at all.
```julia
pino_phase = EquationSolving(dt, pino_solution)
chain = Lux.Chain(Lux.Dense(2, 16, Lux.σ),
    Lux.Dense(16, 16, Lux.σ),
    Lux.Dense(16, 32, Lux.σ),
    Lux.Dense(32, 1))
alg = PINOODE(chain, opt, pino_phase)
fine_tune_solution = solve(prob, alg, verbose = false, maxiters = 2000)
```
This doesn't have the right arguments: where is the space over which things are trained, or a neural-operator-compliant NN? I don't understand how any of this is the PINO.
So let's take a step back. Before doing it for the PDEs, let's get the ODE form. PINOODE needs a chain of the form NN([t], p) and, because ODEProblems don't have it, some representation of the bounds over which to sample p. It needs to then learn by sampling both t and p. Now the weird thing is representing the solution here, since it's not quite an ODESolution. What I would recommend is giving it the ODE solution at the p specified by the prob, but then document that sol.original gives the neural network weights for the NN(t, p) object, and show how it can be used to sample at new points.
@KirillZubov @sathvikbhagavan are we in agreement on the API here?
yes, makes sense.
@ChrisRackauckas Thank you for your comment. I based the implementation of PINOODE on this article: https://arxiv.org/abs/2111.03794. That is where fine_tune_solution and the other features that were unclear to you come from.
I agree with your comment. The mapping between the functional space of parameters and the solution is not explicitly shown in the interface as arguments, e.g. something like mapping = [u0, u(t)]. Instead, the space of parameters is implicitly generated as datasets and put in TRAINSET, which I agree is not the better API solution.
Also, in a more general form, it need not be limited to u0 and p; it could give access to parameterizing any argument, or even a function, as part of an equation.
I'll think about it and propose in this PR what an API for PINOODE might look like according to your comments.
After we agree on the API, I will begin upgrading the PINOODE code to the new requirements.
To discuss the API for PINO PDE, I'll create a separate issue and provide my version of a prototype API there too, but later.
@ChrisRackauckas Considering your comments above, I tried to make a new API for PINOODE. Could you please check whether this is what you described and were waiting for?
```julia
# API
# only physics
equation = (u, p, t) -> cos(p * t)
tspan = (0.0f0, 2.0f0)
u0 = 0.0f0
prob = ODEProblem(equation, u0, tspan)
# prob = PINOODEProblem(equation, tspan)?

# init neural operator
deeponet = DeepONet(branch, trunk)
bounds = (p = [0.1, pi / 2], u0 = [1, 2])
opt = OptimizationOptimisers.Adam(0.1)
alg = NeuralPDE.PINOODE(deeponet, opt, bounds)
sol = solve(prob, alg, dt = 0.1, verbose = true, maxiters = 2000)

# with data
equation = (u, p, t) -> cos(p * t)
tspan = (0.0f0, 2.0f0)
u0 = 0.0f0
prob = ODEProblem(linear, u0 ?, tspan)
# prob = PINOODEProblem(equation, tspan)?

# init neural operator
deeponet = DeepONet(branch, trunk)
opt = OptimizationOptimisers.Adam(0.01)
bounds = (p = [0, pi / 2],)
function data_loss()
    # code
end
alg = NeuralPDE.PINOODE(chain, opt, bounds; add_loss = data_loss)
sol = solve(prob, alg, verbose = false, maxiters = 2000)
```
@KirillZubov Can I suggest a PINO-based spatiotemporal MFG example and test? https://www.mdpi.com/2227-7390/12/6/803
@ChrisRackauckas Also the excellent Sophon.jl https://yichengdwu.github.io/Sophon.jl/dev/ @YichengDWu, for integration with MTK and other SciML packages.
I don't think it's quite there yet. Some comments
https://github.com/LuxDL/LuxNeuralOperators.jl/issues/7 https://github.com/LuxDL/LuxNeuralOperators.jl/issues/9
hey @sathvikbhagavan @ChrisRackauckas I fixed everything that was noted in the last review, except for the vector output; that is an issue currently in progress: https://github.com/LuxDL/LuxNeuralOperators.jl/issues/9. I will do it later in a separate PR.
A few comments mostly cleanups/clarifications. Can you make CI happy as well?
That is because I began implementing vector output, so the PR is WIP again. Yep, sure, I will make CI pass at the end.
@sathvikbhagavan Can we add an unregistered package on CI?
ERROR: LoadError: expected package `LuxNeuralOperators [c0ba2cc5]` to be registered
@avik-pal, can you register LuxNeuralOperators? cc: @ChrisRackauckas
I've done all I can from my side for now with this PR. LuxNeuralOperators is not registered yet. Multi-output is work in progress: https://github.com/LuxDL/LuxNeuralOperators.jl/pull/11#issuecomment-2210419738. So for now I'm moving to the PDE PINO issue https://github.com/SciML/NeuralPDE.jl/pull/862 and will come back to PINO ODE later.
I will review this today.
I have suggested some cleanups/changes. There are some comments from last review not addressed. Can you address them? Can you also run through formatter as well? I think a couple more iterations, it should be close to merging.
Which comments from the last review are not addressed? LuxNeuralOperators is still not registered, and the DeepONet multi-output task is work in progress, so I can't address those before they are done. Now I'm working on PDE PINO and FourierNeuralOperator support.