
Improve Parsing with Symbolics

xtalax opened this issue · 8 comments

The current setup was written before we had all the useful tools now present in Symbolics.jl; this PR takes advantage of those tools and of the PDE helper functions in PDEBase.

xtalax · Apr 12 '23

I recently completely rewrote the parsing in https://github.com/YichengDWu/Sophon.jl/pull/204, and it is now close to the ideal state I mentioned in #687. I used pattern matching from MacroTools there to transform expressions. Although I don't understand what you are doing here, I guess converting it to Symbolics-based rewriting would be straightforward. I hope this helps with this PR.
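For reference, the MacroTools-style rewriting I mean looks roughly like this; a minimal standalone sketch rather than the actual Sophon.jl code, with derivative, phi, εs, and θ as placeholder names:

using MacroTools

# Toy expression standing in for a parsed PDE term
expr = :(Differential(x)(u(x)) + u(x))

rewritten = MacroTools.postwalk(expr) do ex
    # Match Differential(v)(f(w)) and swap in a numeric-derivative call;
    # derivative, phi, εs, and θ are placeholders, not real NeuralPDE names
    if MacroTools.@capture(ex, Differential(v_)(f_(w_)))
        :(derivative(phi, $f, $v, εs, 1, θ))
    else
        ex
    end
end

# rewritten == :(derivative(phi, u, x, εs, 1, θ) + u(x))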

YichengDWu · May 20 '23

@ChrisRackauckas @YichengDWu OK, here is what I expect will work. I'd appreciate a review, and perhaps a comment on how mixed-derivative rules can be generated without enumerating them exhaustively. @YichengDWu, can you clarify what needs to change for mixed derivatives to work as in your PR?

We have lost functionality for multioutput and integrals, to be re-added in future PRs. Is this acceptable?

xtalax · May 26 '23

I'm not sure I understand this rule:

@rule $(Differential(~x)^(~d::isinteger)(~w)) => derivative(phi, u, ~x, get_εs(~w), ~d, θ)

It looks like we should pass the coordinate to derivative, not just ~x here? Anyway, mimicking this rule, mixed derivatives can be handled like the following (but not exactly):

@rule $(Differential(~x1)^(~d1::isinteger)($(Differential(~x2)^(~d2::isinteger)(~w)))) =>
    derivative(phi, (cord_, θ_, phi_) -> derivative(phi_, u, ~x2, get_ε(~w), ~d2, θ_), u, ~x1, get_ε(~w), ~d1, θ)
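The point is just that the inner derivative has to be wrapped in a closure so the outer derivative can differentiate it. A minimal standalone sketch of that idea with plain central differences, not NeuralPDE's derivative function (central_diff and mixed_deriv are illustrative names):

central_diff(f, x, i; ε = 1e-4) = begin
    e = zeros(length(x))
    e[i] = ε
    (f(x .+ e) - f(x .- e)) / (2ε)
end

# d²f/dx₁dx₂ ≈ central difference in x₁ applied to a closure that itself
# takes the central difference in x₂
mixed_deriv(f, x) = central_diff(y -> central_diff(f, y, 2), x, 1)

f(x) = x[1]^2 * x[2]        # ∂²f/∂x₁∂x₂ = 2x₁
mixed_deriv(f, [1.5, 2.0])  # ≈ 3.0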

Can you symbolic_discretize some test PDE to inspect the generated expression?

YichengDWu · May 26 '23

We have lost functionality for multioutput and integrals, to be re-added in future PRs. Is this acceptable?

Or move the code into a submodule, and use Preferences.jl to switch between the old and new backends, until the day the old parsing backend is completely replaced?
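Roughly something like this; a sketch only, where the "parsing_backend" key and the two backend functions are hypothetical names rather than existing NeuralPDE API:

using Preferences

# Inside the NeuralPDE module: read the preference once at load time
const PARSING_BACKEND = @load_preference("parsing_backend", "old")

function build_loss_functions(pdesys, discretization)
    if PARSING_BACKEND == "new"
        new_backend_parse(pdesys, discretization)   # hypothetical new parser entry point
    else
        old_backend_parse(pdesys, discretization)   # hypothetical old parser entry point
    end
end

Users would then opt in from their own environment with set_preferences!(NeuralPDE, "parsing_backend" => "new") and restart Julia.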

YichengDWu · May 30 '23

This fundamentally changes a large part of the codebase, and keeping both backends switchable would add a lot of complexity. I'd rather keep them on different major versions instead.

xtalax · May 31 '23

If anyone needs something backported to the previous major, that's fine, but I'd like to just go out with the old and in with the new. On a major version it's fine to be breaking, and I think a functionality regression is acceptable given a change this large in the maintainership of this package. Honestly, the integro-differential equation piece needs a bit more thought on the definition side anyway, so we should come back to it, but I wouldn't let that hold this up. And multioutput is a very minor feature that I wouldn't call an important one.

ChrisRackauckas · May 31 '23

The loss functions generated are incorrect.

Running this example:

using NeuralPDE, Lux, Flux, ModelingToolkit
import ModelingToolkit: Interval

@parameters θ
@variables u(..)
Dθ = Differential(θ)

# 1D ODE
eq = Dθ(u(θ)) ~ θ^3 + 2 * θ + (θ^2) * ((1 + 3 * (θ^2)) / (1 + θ + (θ^3))) -
                u(θ) * (θ + ((1 + 3 * (θ^2)) / (1 + θ + θ^3)))

# Initial and boundary conditions
bcs = [u(0.0) ~ 1.0]

# Space and time domains
domains = [θ ∈ Interval(0.0, 1.0)]

# Neural network
chain = Lux.Chain(Lux.Dense(1, 12, Flux.σ), Lux.Dense(12, 1))

discretization = NeuralPDE.PhysicsInformedNN(chain, GridTraining(0.1))

@named pdesys = PDESystem(eq, bcs, domains, [θ], [u(θ)])
prob = NeuralPDE.discretize(pdesys, discretization)
prob.f.f(prob.u0, nothing) # Calculate loss

Calculating the loss is erroring out.

I am getting the RuntimeGeneratedFunction at https://github.com/xtalax/NeuralPDE.jl/blob/parsing/src/loss_function_generation.jl#L127 as:

ex = :(function (var"##arg#8771009302963811065", θ_SYMBOL, phi, var"##arg#5964805160111424296")
      #= /home/sathvikbhagavan/.julia/packages/SymbolicUtils/ssQsQ/src/code.jl:373 =#
      #= /home/sathvikbhagavan/.julia/packages/SymbolicUtils/ssQsQ/src/code.jl:374 =#
      #= /home/sathvikbhagavan/.julia/packages/SymbolicUtils/ssQsQ/src/code.jl:375 =#
      begin
          θ = var"##arg#8771009302963811065"[1]
          nothing
      end
  end)

which does not look correct. I inserted show statements in https://github.com/xtalax/NeuralPDE.jl/blob/parsing/src/loss_function_generation.jl#L117 and https://github.com/xtalax/NeuralPDE.jl/blob/parsing/src/loss_function_generation.jl#L118 and found:

expr = 82570.18592591035(-phi(SymbolicUtils.BasicSymbolic{Real}[-6.055454452393343e-6 + θ], θ_SYMBOL) + phi(SymbolicUtils.BasicSymbolic{Real}[6.055454452393343e-6 + θ], θ_SYMBOL)) + ((-1 - 3(θ^2))*(θ^2)) / (1 + θ + θ^3) - 2θ + ((1 + 3(θ^2)) / (1 + θ + θ^3) + θ)*phi(SymbolicUtils.BasicSymbolic{Real}[θ], θ_SYMBOL) - (θ^3)
expr = nothing

So expr = swch(expr) in https://github.com/xtalax/NeuralPDE.jl/blob/parsing/src/loss_function_generation.jl#L118 is returning nothing, as nothing in the first expression matches the rewrite rules in swch.

So there might be a bug in how the symbolic rules are written. I am not familiar enough with SymbolicUtils to pinpoint the bug.

@xtalax

cc: @ChrisRackauckas

sathvikbhagavan · Jan 19 '24

My guess would be to try wrapping the rule in a Chain before the Postwalk, and maybe also in a vector.
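For context, a bare @rule returns nothing when its pattern does not match, while a Chain passes unmatched expressions through unchanged, which is likely why expr comes back as nothing above. A minimal SymbolicUtils illustration (toy rule, unrelated to the actual NeuralPDE rules):

using SymbolicUtils
using SymbolicUtils.Rewriters: Postwalk, Chain

@syms w z
r = @rule sin(~x)^2 + cos(~x)^2 => 1

r(sin(w)^2 + cos(w)^2)   # 1
r(w + z)                 # nothing -- a bare rule returns nothing on no match

# Chain returns its input when no rule fires; Postwalk applies it to every
# subexpression, so non-matching expressions survive the rewrite unchanged.
rw = Postwalk(Chain([r]))
rw(sin(w)^2 + cos(w)^2)  # 1
rw(w + z)                # w + z, passed through instead of nothing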

xtalax · Feb 09 '24

Superseded by #877; closing this.

sathvikbhagavan · Jul 19 '24