
Random noise as an initial condition

oashour opened this issue on Jan 17 '22 · 10 comments

I am moving this over from Discourse at Chris's request. A short summary is below.

Essentially I'm trying to use random noise as the initial condition of my simulation of the PDEs shown in that thread. Currently, I have

bcs = [c₁(0,x) ~ rand(),
       c₂(0,x) ~ rand(),
       c₁(t,0) ~ 0.,
       c₂(t,0) ~ 0.,
       c₁(t,30) ~ 0.,
       c₂(t,30) ~ 0.]

But, as Chris pointed out, this bakes in a single value as the initial condition instead of generating a new one each time. I am not sure how to proceed; something was mentioned about solving over all initial conditions and treating them as an extra dimension, but I don't quite understand this, as I'm very new to PINNs.
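
To make the failure concrete: rand() is evaluated once, eagerly, while the equation is being built, so the right-hand side is a fixed constant. A minimal sketch, assuming only ModelingToolkit is loaded:

using ModelingToolkit

@parameters t x
@variables c₁(..)
bc = c₁(0, x) ~ rand()  # rand() runs here, once; bc now stores a single
                        # Float64 that is the same at every x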

Thanks!

oashour avatar Jan 17 '22 17:01 oashour

As I mentioned in the Discourse thread, I think the right thing to do is to train with the initial condition being a different independent variable and then sample across that dimension.

Maybe @KirillZubov can help get a demo of that together.

ChrisRackauckas avatar Jan 18 '22 01:01 ChrisRackauckas

@oashour you need @register_symbolic (https://symbolics.juliasymbolics.org/dev/manual/functions/) so the noise function stays symbolic instead of being evaluated while the equations are built:

using NeuralPDE, Flux, DiffEqFlux, ModelingToolkit, GalacticOptim
using ModelingToolkit: Interval

@parameters t x
@variables c₁(..) c₂(..)

# Noise with the same shape as the input, registered as a symbolic
# primitive so it is not evaluated while the equations are built.
rand_f(x) = rand(size(x)...)
@register_symbolic rand_f(x)
# Broadcasting over a batch of training points calls the function once.
Base.Broadcast.broadcasted(::typeof(rand_f), x) = rand_f(x)

vec_ = ones(1,10)
rand_f(vec_)
"""
julia> rand_f(vec_)
1×10 Matrix{Float64}:
 0.384208  0.0996561  0.990935  0.9898  0.479029  0.516312  0.635055  0.480348  0.789933  0.665574
"""

eq = c₁(t,x) + c₂(t,x) ~ 0  # placeholder equation for the demo
bcs = [c₁(0,x) ~ rand_f(x),
       c₂(0,x) ~ rand_f(x),
       c₁(t,0) ~ 0.,
       c₂(t,0) ~ 0.,
       c₁(t,30) ~ 0.,
       c₂(t,30) ~ 0.]
"""
output:
6-element Vector{Equation}:
 c₁(0, x) ~ rand_f(x)
 c₂(0, x) ~ rand_f(x)
 c₁(t, 0) ~ 0.0
 c₂(t, 0) ~ 0.0
 c₁(t, 30) ~ 0.0
 c₂(t, 30) ~ 0.0
"""

domains = [t ∈ Interval(0.0,1.0),
           x ∈ Interval(0.0,30.0)]  # x runs to 30 to match the boundary conditions

# One network per unknown variable; each takes (t, x) as input.
chains = [FastChain(FastDense(2,12,Flux.tanh), FastDense(12,12,Flux.tanh), FastDense(12,1)),
          FastChain(FastDense(2,12,Flux.tanh), FastDense(12,12,Flux.tanh), FastDense(12,1))]

initθ = map(c -> Float64.(c), DiffEqFlux.initial_params.(chains))
grid_strategy = NeuralPDE.GridTraining(0.1)
discretization = NeuralPDE.PhysicsInformedNN(chains,
                                             grid_strategy;
                                             init_params = initθ)

@named pde_system = PDESystem(eq,bcs,domains,[t,x],[c₁(t,x),c₂(t,x)])
prob = NeuralPDE.discretize(pde_system,discretization)
sym_prob = NeuralPDE.symbolic_discretize(pde_system,discretization)

prob.f(reduce(vcat, initθ), nothing)  # evaluate the loss once as a sanity check
res = GalacticOptim.solve(prob, ADAM(0.1); maxiters=500)

# Two-dimensional case: noise shaped like the two stacked inputs.
rand_f(x,y) = rand(size(vcat(x,y))...)
@register_symbolic rand_f(x,y)
Base.Broadcast.broadcasted(::typeof(rand_f), x, y) = rand_f(x,y)
rand_f.(ones(1,10), ones(1,10))

KirillZubov avatar Jan 20 '22 12:01 KirillZubov

I'm not sure that's the right way to handle it. Why not handle the random factor as deterministic for the training and then sample on the results?

ChrisRackauckas avatar Jan 20 '22 12:01 ChrisRackauckas

@ChrisRackauckas do you mean something like this algorithm?

for every x in x_vector, create a random vector rand(N), where N is the size of the vector
xi ~ rand(N)
vector of results of initial conditions: c₁(0, xi) ~ rand(N) for each xi in x_vector
loss_functions = vector of loss_function(c₁(0, xi), rand(N)) for each xi in x_vector
choose the one with the best convergence from the vector of loss_functions

KirillZubov avatar Jan 20 '22 13:01 KirillZubov

No. Think of it as a PDE with 2 more dimensions. Make the initial condition be c₁(0,x,r1,r2) ~ r1, c₂(0,x,r1,r2) ~ r2 and solve. Then every slice NN(:,:,r1,r2) is a solution with the given initial condition. This means that NN(:,:,rand(2)...) is a quick and effective way to sample solutions with random initial conditions.
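
An untested sketch of that setup, reusing the placeholder equation and network layout from the snippet above (the ranges chosen for r₁ and r₂ are assumptions):

@parameters t x r₁ r₂
@variables c₁(..) c₂(..)

eq = c₁(t,x,r₁,r₂) + c₂(t,x,r₁,r₂) ~ 0   # placeholder equation, as above

bcs = [c₁(0,x,r₁,r₂) ~ r₁,   # the initial condition is the r₁ coordinate itself
       c₂(0,x,r₁,r₂) ~ r₂,
       c₁(t,0,r₁,r₂) ~ 0.,
       c₂(t,0,r₁,r₂) ~ 0.,
       c₁(t,30,r₁,r₂) ~ 0.,
       c₂(t,30,r₁,r₂) ~ 0.]

domains = [t ∈ Interval(0.0,1.0),
           x ∈ Interval(0.0,30.0),
           r₁ ∈ Interval(0.0,1.0),   # assumed range of the noise
           r₂ ∈ Interval(0.0,1.0)]

# Each network now takes the 4 inputs (t, x, r₁, r₂).
chains = [FastChain(FastDense(4,12,Flux.tanh), FastDense(12,12,Flux.tanh), FastDense(12,1))
          for _ in 1:2]

@named pde_system = PDESystem(eq, bcs, domains, [t,x,r₁,r₂],
                              [c₁(t,x,r₁,r₂), c₂(t,x,r₁,r₂)])
# Training proceeds exactly as before; afterwards, each fixed (r₁, r₂)
# slice of the network is the solution for that initial condition.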

ChrisRackauckas avatar Jan 20 '22 13:01 ChrisRackauckas

Why do we use random numbers as initial conditions? Is the initial condition unknown, so we try to pick a better one from a random set? A sort of Monte Carlo method.

Trying to figure out this problem https://discourse.julialang.org/t/diffeqoperators-and-differentialequations-solving-a-coupled-system-of-pdes/74722

KirillZubov avatar Jan 20 '22 13:01 KirillZubov

There is no randomness in the way I described the training.

ChrisRackauckas avatar Jan 20 '22 13:01 ChrisRackauckas

That is, we initialize the random vectors r1, r2 once and use them without generating new ones at each iteration?

KirillZubov avatar Jan 20 '22 13:01 KirillZubov

No, they are not random. They are independent variables, like t or x.
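
Concretely, after training the 4-input networks from the sketch above, sampling a random initial condition is just an evaluation at random r-coordinates. An untested sketch, where θ₁ stands for the first network's slice of the optimized parameters (split as in the usual multi-network examples):

phi = discretization.phi          # vector of trial functions, one per network
r = rand(2)                       # pick one initial-condition slice at random
u₁(t, x) = first(phi[1]([t, x, r...], θ₁))   # θ₁: assumed parameter split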

ChrisRackauckas avatar Jan 20 '22 13:01 ChrisRackauckas

ok, got it

KirillZubov avatar Jan 20 '22 13:01 KirillZubov