
functional inverse problem

Open · YichengDWu opened this issue 1 year ago · 5 comments

I have an inverse problem where the parameter function is a function of the solution. What is the correct way to implement it?

using ModelingToolkit, NeuralPDE, Lux, Random, NNlib
using Optimization, OptimizationOptimJL, Zygote
import ModelingToolkit: Interval

@parameters x
@variables u(..), b(..)

# forcing manufactured so that u(x) = sin(πx) solves the PDE with b(u) = (1 + u^2)/(1 + 2u^2)
f(x) = (1+(sin(π*x)^2))/(1+2*(sin(π*x)^2)) * π^2 * sin(π*x) + sin(π*x)
Dxx = Differential(x)^2
eq = [-b(u(x)) * Dxx(u(x)) + u(x) ~ f(x)]  # b is a function of the solution u, not of x

bcs = [u(0) ~ 0, u(1) ~ 0]
domain = [x ∈ Interval(0.0, 1.0)]

@named pdesys = PDESystem(eq, bcs, domain, [x], [u(x), b(u(x))])

# neural networks approximating u(x) and b(u)
chain_u = Chain(Dense(1, 20, tanh), Dense(20, 20, tanh), Dense(20, 1))
chain_b = Chain(Dense(1, 20, tanh), Dense(20, 20, tanh), Dense(20, 1))

# interior data points sampled from the exact solution u(x) = sin(πx)
x_data = reshape(LinRange(0, 1, 101)[2:end-1], 1, :) |> collect
u_data = sin.(π .* x_data)

# mean-squared misfit between the u network and the data
function additional_loss(phi, θ, p)
    return sum(abs2, phi[1](x_data, θ.depvar.u) .- u_data) / length(x_data)
end

dx = 0.01
discretization = NeuralPDE.PhysicsInformedNN([chain_u, chain_b],
                                             NeuralPDE.GridTraining(dx),
                                             additional_loss = additional_loss)

prob = NeuralPDE.discretize(pdesys, discretization)
callback = function (p, l)
    println("Current loss is: $l")
    return false
end
res = Optimization.solve(prob, BFGS(), callback = callback, maxiters = 500)

YichengDWu · Jul 07 '22

julia> prob = NeuralPDE.discretize(pdesys, discretization)
ERROR: MethodError: no method matching nameof(::Term{Real, Base.ImmutableDict{DataType, Any}})
Closest candidates are:
  nameof(::Sym) at C:\Users\Luffy\.julia\packages\SymbolicUtils\vnuIf\src\types.jl:144
  nameof(::ModelingToolkit.AbstractSystem) at C:\Users\Luffy\.julia\packages\ModelingToolkit\tMgaW\src\systems\abstractsystem.jl:139
  nameof(::DataType) at C:\Users\Luffy\AppData\Local\Programs\Julia-1.7.2\share\julia\base\reflection.jl:223
  ...
Stacktrace:
 [1] (::NeuralPDE.var"#40#41")(argument::Term{Real, Base.ImmutableDict{DataType, Any}})
   @ NeuralPDE .\none:0
 [2] iterate
   @ .\generator.jl:47 [inlined]
 [3] collect(itr::Base.Generator{Vector{Term{Real, Base.ImmutableDict{DataType, Any}}}, NeuralPDE.var"#40#41"})
   @ Base .\array.jl:724
 [4] get_vars(indvars_::Vector{Num}, depvars_::Vector{Num})
   @ NeuralPDE C:\Users\Luffy\.julia\packages\NeuralPDE\iNhvg\src\symbolic_utilities.jl:353
 [5] symbolic_discretize(pde_system::PDESystem, discretization::PhysicsInformedNN{GridTraining{Float64}, Nothing, Vector{NeuralPDE.Phi{Chain{NamedTuple{(:layer_1, :layer_2, :layer_3), Tuple{Dense{true, typeof(tanh_fast), typeof(Lux.glorot_uniform), typeof(Lux.zeros32)}, Dense{true, typeof(tanh_fast), typeof(Lux.glorot_uniform), typeof(Lux.zeros32)}, Dense{true, typeof(identity), typeof(Lux.glorot_uniform), typeof(Lux.zeros32)}}}}, NamedTuple{(:layer_1, :layer_2, :layer_3), Tuple{NamedTuple{(), Tuple{}}, NamedTuple{(), Tuple{}}, NamedTuple{(), Tuple{}}}}}}, typeof(NeuralPDE.numeric_derivative), Bool, typeof(additional_loss), Nothing, Nothing, Base.Pairs{Symbol, Union{}, Tuple{}, NamedTuple{(), Tuple{}}}})
   @ NeuralPDE C:\Users\Luffy\.julia\packages\NeuralPDE\iNhvg\src\discretize.jl:420
 [6] discretize(pde_system::PDESystem, discretization::PhysicsInformedNN{GridTraining{Float64}, Nothing, Vector{NeuralPDE.Phi{Chain{NamedTuple{(:layer_1, :layer_2, :layer_3), Tuple{Dense{true, typeof(tanh_fast), typeof(Lux.glorot_uniform), typeof(Lux.zeros32)}, Dense{true, typeof(tanh_fast), typeof(Lux.glorot_uniform), typeof(Lux.zeros32)}, Dense{true, typeof(identity), typeof(Lux.glorot_uniform), typeof(Lux.zeros32)}}}}, NamedTuple{(:layer_1, :layer_2, :layer_3), Tuple{NamedTuple{(), Tuple{}}, NamedTuple{(), Tuple{}}, NamedTuple{(), Tuple{}}}}}}, typeof(NeuralPDE.numeric_derivative), Bool, typeof(additional_loss), Nothing, Nothing, Base.Pairs{Symbol, Union{}, Tuple{}, NamedTuple{(), Tuple{}}}})
   @ NeuralPDE C:\Users\Luffy\.julia\packages\NeuralPDE\iNhvg\src\discretize.jl:669
 [7] top-level scope
   @ REPL[106]:1
 [8] top-level scope
   @ C:\Users\Luffy\.julia\packages\CUDA\tTK8Y\src\initialization.jl:52

YichengDWu · Jul 07 '22

Changing b(u(x)) to b(x) would work, but that is not how I want it to be.
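
For reference, that workaround only changes the equation and the system declaration; the rest of the script stays the same (a sketch):

# workaround: parameterize the unknown coefficient by x instead of by u
eq = [-b(x) * Dxx(u(x)) + u(x) ~ f(x)]
@named pdesys = PDESystem(eq, bcs, domain, [x], [u(x), b(x)])

Note this recovers the composite x -> b(u(x)) along the solution, not the constitutive law b(u) itself, which is why the two formulations are not equivalent.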

YichengDWu · Jul 07 '22

I'm not sure this case will parse. It probably needs a few changes to be supported. I can't think of a quick workaround either, so I'll say this is unsupported right now.

I'm going to post about how the parsing and codegen in this repo should be changed, which would make it easy to support this, but it won't happen "soon", so I'd just write the PINN out by hand here if you need it. This is a good thing to target, though, and the code you posted is probably what the front end should look like.
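
For anyone who needs this before the frontend supports it, a hand-written version might look like the sketch below. This is a minimal sketch under stated assumptions, not NeuralPDE's internals: u_at, b_at, and the step ε are made-up helper names, a central finite difference stands in for the second derivative, and the PDE, boundary, and data losses get equal, untuned weights.

using Lux, Zygote, Optimization, OptimizationOptimJL, ComponentArrays, Random

# forcing from the script above, manufactured so that u(x) = sin(πx)
# and b(u) = (1 + u^2)/(1 + 2u^2) solve -b(u)u'' + u = f
f(x) = (1+(sin(π*x)^2))/(1+2*(sin(π*x)^2)) * π^2 * sin(π*x) + sin(π*x)

chain_u = Chain(Dense(1, 20, tanh), Dense(20, 20, tanh), Dense(20, 1))
chain_b = Chain(Dense(1, 20, tanh), Dense(20, 20, tanh), Dense(20, 1))

rng = Random.default_rng()
ps_u, st_u = Lux.setup(rng, chain_u)
ps_b, st_b = Lux.setup(rng, chain_b)
# one flat Float64 parameter vector so BFGS and the finite differences behave
θ0 = Float64.(ComponentArray(u = ps_u, b = ps_b))

# batched network evaluations; inputs and outputs are 1×N matrices
u_at(X, θ) = first(chain_u(X, θ.u, st_u))
b_at(U, θ) = first(chain_b(U, θ.b, st_b))

# interior grid, reused here as both collocation points and data locations
X = reshape(collect(range(0.0, 1.0; length = 101))[2:end-1], 1, :)
U_data = sin.(π .* X)

const ε = cbrt(eps(Float64))  # heuristic central-difference step for u''

function loss(θ, p)
    U    = u_at(X, θ)
    Uxx  = (u_at(X .+ ε, θ) .- 2 .* U .+ u_at(X .- ε, θ)) ./ ε^2
    res  = -b_at(U, θ) .* Uxx .+ U .- f.(X)     # residual of -b(u)u'' + u - f
    pde  = sum(abs2, res) / length(X)
    bc   = sum(abs2, u_at([0.0 1.0], θ))        # u(0) = u(1) = 0
    data = sum(abs2, U .- U_data) / length(X)
    return pde + bc + data                      # equal weights, untuned
end

optf = OptimizationFunction(loss, Optimization.AutoZygote())
prob = OptimizationProblem(optf, θ0)
res = Optimization.solve(prob, BFGS(); maxiters = 500)
# res.u.u and res.u.b hold the trained parameters of chain_u and chain_b

The point of the exercise is the residual line: chain_b is evaluated on the output of chain_u, which is exactly the b(u(x)) coupling that the symbolic frontend cannot parse yet.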

ChrisRackauckas · Jul 07 '22

I'll just move on if it's not supported yet.

YichengDWu · Jul 07 '22

But yeah, a very good feature to have.

YichengDWu · Jul 08 '22