NeuralPDE.jl
Alternative to ModelingToolkit for PDE expressions
Hi,
Is there an alternative way to describe the PDE to solve that doesn't require the use of ModelingToolkit or Symbolics? For example, suppose I need to write:

```julia
Differential(x)(g(x)*f(x)) ~ 0.0
```

where f(x) cannot be represented symbolically (https://symbolics.juliasymbolics.org/dev/manual/faq/).
I am aware of Symbolics's @register macro, but I'm afraid it would be too cumbersome to obtain the analytical expressions of the derivatives and register them all for use within this framework.
My function f(x) is high-dimensional and involves many gridded interpolations obtained from Interpolations.jl. These interpolation tables are not supported by Symbolics; I get the dreaded `ERROR: TypeError: non-boolean (Num) used in boolean context` error.
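For context, the error comes from evaluating a value-dependent branch on a symbolic `Num`. A minimal sketch of what I mean (here `g` is just a stand-in for one of my interpolation calls, not my real code):

```julia
using Symbolics
@variables x
g(v) = v > 0 ? v : -v   # branches on the runtime value of v
g(x)                    # ERROR: TypeError: non-boolean (Num) used in boolean context
```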
However, f(x) is ForwardDiff-compatible, so I can evaluate my manually written PDE at any set of points I choose. So my question is: can I set up NeuralPDE to solve my equation containing a non-quasi-static function without using Symbolics/MTK?
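To illustrate what I mean by ForwardDiff-compatible, here is a toy sketch with a made-up one-dimensional interpolant (`itp`, `grid`, and `myf` are placeholder names, not my real tables):

```julia
using ForwardDiff, Interpolations
grid = 0.0:0.1:1.0
itp = LinearInterpolation(grid, grid.^2)  # gridded lookup, opaque to Symbolics
myf(x) = itp(x)
ForwardDiff.derivative(myf, 0.55)         # numeric dual-number evaluation goes through fine
```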
> but I'm afraid it would be too cumbersome to obtain the analytical expressions of the derivatives and register them all for use within this framework.
You don't need to. NeuralPDE doesn't use symbolic derivatives; it just needs to be able to AD through f.
> My function f(x) is high-dimensional and involves a lot of gridded interpolations obtained from Interpolations.jl. These interpolation tables are not supported by Symbolics, I get the dreaded ERROR: TypeError: non-boolean (Num) used in boolean context error.
That's exactly a case that works with @register.
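Something like this sketch (with `itp` and `lookup` as placeholder names): wrapping the table lookup and registering it keeps the call opaque to the symbolic tracer, so it never hits the boolean branch.

```julia
using Symbolics, Interpolations
itp = LinearInterpolation(0.0:0.1:1.0, sin.(0.0:0.1:1.0))
lookup(x) = itp(x)    # would throw the non-boolean (Num) error if traced
@register lookup(x)   # now lookup(x) is kept as an opaque symbolic call
@variables x
ex = lookup(x) + x    # builds fine; lookup is not traced into
```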
Thanks for the quick response!
So I am able to register the functions inside f(x) that are not symbolically representable, but those inner functions require derivatives too. Assume that in the following function, the inner functions g_i(x) are non-quasi-static and therefore need @register:
```julia
function f(x)
    return [g_1(x), g_2(x), g_3(x), .., g_n(x)]
end
```
I registered all of the inner g_i functions, so f(x) no longer triggers the non-boolean error. Instead, the symbolic expression for f(x) retains the g_i function calls.
This allows me to obtain the PDE. However, when NeuralPDE attempts to build the PDE loss function, it encounters an error:

```
ERROR: LoadError: KeyError: key g_1 not found
```
Doesn't this mean that I have to register the derivatives of all the g_i's manually as well? This is what I discerned from this page.
How do I use AD here, then?
What is the error that is thrown? Please post the whole error.
Okay, a portion of my f(x) looks like this:

```julia
f([xV, xα]) = xV + 0.0031009487651181677normV(xV)*cos(xα)
```

normV(xV) has been @register-ed. Simplifying my problem for the sake of explanation, the PDE is:

```julia
Differential(xV)(f([xV, xα])) ~ 0.0f0
```
From NeuralPDE, the error is now:

```
ERROR: LoadError: KeyError: key normV not found
Stacktrace:
[1] getindex(h::Dict{Symbol, Int64}, key::Function)
@ Base ./dict.jl:482
[2] _transform_expression(ex::Expr, dict_indvars::Dict{Symbol, Int64}, dict_depvars::Dict{Symbol, Int64}, chain::Chain{Tuple{Dense{typeof(tanh), Matrix{Float32}, Vector{Float32}}, Dense{typeof(tanh), Matrix{Float32}, Vector{Float32}}, Dense{typeof(identity), Matrix{Float32}, Vector{Float32}}}}, eltypeθ::Type, strategy::GridTraining)
@ NeuralPDE ~/.julia/packages/NeuralPDE/RwOGF/src/pinns_pde_solve.jl:209
[3] _transform_expression(ex::Expr, dict_indvars::Dict{Symbol, Int64}, dict_depvars::Dict{Symbol, Int64}, chain::Chain{Tuple{Dense{typeof(tanh), Matrix{Float32}, Vector{Float32}}, Dense{typeof(tanh), Matrix{Float32}, Vector{Float32}}, Dense{typeof(identity), Matrix{Float32}, Vector{Float32}}}}, eltypeθ::Type, strategy::GridTraining) (repeats 5 times)
@ NeuralPDE ~/.julia/packages/NeuralPDE/RwOGF/src/pinns_pde_solve.jl:224
[4] transform_expression
@ ~/.julia/packages/NeuralPDE/RwOGF/src/pinns_pde_solve.jl:155 [inlined]
[5] parse_equation(eq::Equation, dict_indvars::Dict{Symbol, Int64}, dict_depvars::Dict{Symbol, Int64}, chain::Chain{Tuple{Dense{typeof(tanh), Matrix{Float32}, Vector{Float32}}, Dense{typeof(tanh), Matrix{Float32}, Vector{Float32}}, Dense{typeof(identity), Matrix{Float32}, Vector{Float32}}}}, eltypeθ::Type, strategy::GridTraining)
@ NeuralPDE ~/.julia/packages/NeuralPDE/RwOGF/src/pinns_pde_solve.jl:268
[6] build_symbolic_loss_function(eqs::Equation, indvars::Vector{Symbol}, depvars::Vector{Symbol}, dict_indvars::Dict{Symbol, Int64}, dict_depvars::Dict{Symbol, Int64}, phi::NeuralPDE.var"#255#257"{UnionAll, Flux.var"#61#63"{Chain{Tuple{Dense{typeof(tanh), Matrix{Float32}, Vector{Float32}}, Dense{typeof(tanh), Matrix{Float32}, Vector{Float32}}, Dense{typeof(identity), Matrix{Float32}, Vector{Float32}}}}}}, derivative::NeuralPDE.var"#260#261", chain::Chain{Tuple{Dense{typeof(tanh), Matrix{Float32}, Vector{Float32}}, Dense{typeof(tanh), Matrix{Float32}, Vector{Float32}}, Dense{typeof(identity), Matrix{Float32}, Vector{Float32}}}}, initθ::Vector{Float32}, strategy::GridTraining; eq_params::SciMLBase.NullParameters, param_estim::Bool, default_p::Nothing, bc_indvars::Vector{Symbol})
@ NeuralPDE ~/.julia/packages/NeuralPDE/RwOGF/src/pinns_pde_solve.jl:345
[7] #build_loss_function#162
@ ~/.julia/packages/NeuralPDE/RwOGF/src/pinns_pde_solve.jl:446 [inlined]
[8] build_loss_function(eqs::Equation, _indvars::Vector{Num}, _depvars::Vector{Sym{SymbolicUtils.FnType{Tuple, Real}, Nothing}}, phi::NeuralPDE.var"#255#257"{UnionAll, Flux.var"#61#63"{Chain{Tuple{Dense{typeof(tanh), Matrix{Float32}, Vector{Float32}}, Dense{typeof(tanh), Matrix{Float32}, Vector{Float32}}, Dense{typeof(identity), Matrix{Float32}, Vector{Float32}}}}}}, derivative::NeuralPDE.var"#260#261", chain::Chain{Tuple{Dense{typeof(tanh), Matrix{Float32}, Vector{Float32}}, Dense{typeof(tanh), Matrix{Float32}, Vector{Float32}}, Dense{typeof(identity), Matrix{Float32}, Vector{Float32}}}}, initθ::Vector{Float32}, strategy::GridTraining; bc_indvars::Nothing, eq_params::SciMLBase.NullParameters, param_estim::Bool, default_p::Nothing)
@ NeuralPDE ~/.julia/packages/NeuralPDE/RwOGF/src/pinns_pde_solve.jl:430
[9] build_loss_function(eqs::Equation, _indvars::Vector{Num}, _depvars::Vector{Sym{SymbolicUtils.FnType{Tuple, Real}, Nothing}}, phi::Function, derivative::Function, chain::Chain{Tuple{Dense{typeof(tanh), Matrix{Float32}, Vector{Float32}}, Dense{typeof(tanh), Matrix{Float32}, Vector{Float32}}, Dense{typeof(identity), Matrix{Float32}, Vector{Float32}}}}, initθ::Vector{Float32}, strategy::GridTraining)
@ NeuralPDE ~/.julia/packages/NeuralPDE/RwOGF/src/pinns_pde_solve.jl:428
```
Is that not expected?
Not expected. That looks like a namespacing error in the NeuralPDE parsing. @KirillZubov could you take a look?
@vaishnavtv
NeuralPDE at the moment doesn't work with vector input like f([xV, xα]). Differential works only with the neural network (NN) input:

```julia
@parameters xV
@variables u(..)
Differential(xV)(u(xV))
```

where u is the variable that is predicted by the NN. A non-symbolic function needs @register together with an AD derivative:
```julia
using NeuralPDE, ModelingToolkit, Flux, DiffEqFlux, GalacticOptim, Zygote
import ModelingToolkit: Interval

cb = function (p, l)
    println("Current loss is: $l")
    return false
end

ff(x, y) = x^2 + 2y
der_ff(x, y) = Zygote.gradient(x -> ff(x, y), x)[1]
ff(2, 2)
der_ff(3, 2) == 2*3

@parameters xV, xα
normV(xV) = xV^2  # some non-symbolic function
f(xV, xα) = xV + 0.0031009487651181677normV(xV)*cos(xα)
der_f(xV, xα) = Zygote.gradient(x -> f(x, xα), xV)[1]
der_f(2.0, 1.0)
(der_f).(rand(1,10), rand(1,10))
@register der_f(xV, xα)

@variables u(..)
eq = der_f(xV, xα) ~ u(xV, xα)
bcs = [u(0., 0.) ~ 1.0]
# Space and time domains
domains = [xV ∈ Interval(0.0, 1.0), xα ∈ Interval(0.0, 1.0)]
# Neural network
chain = FastChain(FastDense(2, 12, Flux.σ), FastDense(12, 1))
initθ = Float64.(DiffEqFlux.initial_params(chain))
strategy_ = NeuralPDE.GridTraining(0.1)
discretization = NeuralPDE.PhysicsInformedNN(chain, strategy_;
                                             init_params = initθ)
@named pde_system = PDESystem(eq, bcs, domains, [xV, xα], [u(xV, xα)])
prob = NeuralPDE.discretize(pde_system, discretization)
sym_prob = NeuralPDE.symbolic_discretize(pde_system, discretization)
prob.f.f.loss_function(initθ)
res = GalacticOptim.solve(prob, ADAM(0.1); cb=cb, maxiters=100)
```
If it is in symbolic form, ModelingToolkit handles it:
```julia
@parameters x
@variables u(..)
D = Differential(x)
f(x) = x^2

julia> eq_l = D(u(x) + f(x))
Differential(x)(u(x) + x^2)

julia> expand_derivatives(eq_l)
Differential(x)(u(x)) + 2x
```
Another possible case is a @register-ed function.
Issue: we need to support derivatives of @register-ed functions.
So AD isn't called by default when parsing @register-ed function expressions that require derivatives? Do the AD derivatives of such functions have to be manually created and registered for MTK to work?
Please let me know if this is not a NeuralPDE issue but an MTK or Symbolics issue.
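If manual registration of derivatives does turn out to be needed, my understanding (please correct me if wrong) is that Symbolics lets you attach one by overloading `Symbolics.derivative` for the registered function. A sketch with a toy function `h` (a stand-in, not my real code):

```julia
using Symbolics
h(x) = x^2
@register h(x)
# derivative of h with respect to its first argument
Symbolics.derivative(::typeof(h), args::NTuple{1,Any}, ::Val{1}) = 2args[1]
@variables x
expand_derivatives(Differential(x)(h(x)))  # reduces to 2x once the rule is defined
```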
Here is the NeuralPDE parser issue with the derivative of a @register-ed function:
```julia
@parameters x
@variables u(..)
D = Differential(x)
f(x) = x^2
@register f(x)
eq = D(u(x) + f(x)) ~ 0.
expand_derivatives(eq)
bcs = [u(0) ~ 0.0]
domains = [x ∈ Interval(0.0, 1.0)]
chain = FastChain(FastDense(1, 12, Flux.σ), FastDense(12, 1))
strategy_ = NeuralPDE.GridTraining(0.1)
discretization = NeuralPDE.PhysicsInformedNN(chain, strategy_)
@named pde_system = PDESystem(eq, bcs, domains, [x], [u(x)])

julia> prob = NeuralPDE.discretize(pde_system, discretization)
ERROR: KeyError: key f not found
```