SymbolicUtils.jl
Add a ModelingToolkit benchmark
@YingboMa suggested benchmarking factorized_W in https://github.com/SciML/DiffEqProblemLibrary.jl/blob/master/src/ode/ode_simple_nonlinear_prob.jl
It would be nice if PkgBenchmark could detect a Project.toml in the benchmark folder.
see https://github.com/JuliaCI/PkgBenchmark.jl/issues/114
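As a rough sketch of how that could be wired up under PkgBenchmark's usual convention of a benchmark/benchmarks.jl file defining a SUITE (the toy expressions and the "jacobian" group name below are placeholders, not the actual benchmark):

# benchmark/benchmarks.jl — hypothetical sketch of the suite layout
using BenchmarkTools
using ModelingToolkit

const SUITE = BenchmarkGroup()

# Tiny placeholder Jacobian benchmark; the real suite would use the
# discretized-PDE expressions from the scripts below.
@variables x y z
exprs = [x^2 + sin(y), x*y*z, exp(z) - y]
vars  = [x, y, z]
SUITE["jacobian"] = @benchmarkable ModelingToolkit.jacobian($exprs, $vars)

Putting BenchmarkTools and ModelingToolkit into a benchmark/Project.toml is exactly the part that needs the PkgBenchmark support linked above.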
using ModelingToolkit, LinearAlgebra, SparseArrays
using BenchmarkTools # for @btime below
# Define the constants for the PDE
const α₂ = 1.0
const α₃ = 1.0
const β₁ = 1.0
const β₂ = 1.0
const β₃ = 1.0
const r₁ = 1.0
const r₂ = 1.0
const _DD = 100.0
const γ₁ = 0.1
const γ₂ = 0.1
const γ₃ = 0.1
const N = 8
const X = reshape([i for i in 1:N for j in 1:N],N,N)
const Y = reshape([j for i in 1:N for j in 1:N],N,N)
const α₁ = 1.0.*(X.>=4*N/5)
const Mx = Tridiagonal([1.0 for i in 1:N-1],[-2.0 for i in 1:N],[1.0 for i in 1:N-1])
const My = copy(Mx)
Mx[2,1] = 2.0
Mx[end-1,end] = 2.0
My[1,2] = 2.0
My[end,end-1] = 2.0
# Define the symbolic state, derivative, and cache arrays
@variables du[1:N,1:N,1:3] u[1:N,1:N,1:3] MyA[1:N,1:N] AMx[1:N,1:N] DA[1:N,1:N]
# Define the discretized PDE as an ODE function
function f(du,u,p,t)
    A = @view u[:,:,1]
    B = @view u[:,:,2]
    C = @view u[:,:,3]
    dA = @view du[:,:,1]
    dB = @view du[:,:,2]
    dC = @view du[:,:,3]
    mul!(MyA,My,A)
    mul!(AMx,A,Mx)
    @. DA = _DD*(MyA + AMx)
    @. dA = DA + α₁ - β₁*A - r₁*A*B + r₂*C
    @. dB = α₂ - β₂*B - r₁*A*B + r₂*C
    @. dC = α₃ - β₃*C + r₁*A*B - r₂*C
end
f(du,u,nothing,0.0)
jac = sparse(ModelingToolkit.jacobian(vec(du),vec(u),simplify=false))
serialjac = eval(ModelingToolkit.build_function(vec(jac),u)[2])
multithreadedjac = eval(ModelingToolkit.build_function(vec(jac),u,parallel=ModelingToolkit.MultithreadedForm())[2])
distributedjac = eval(ModelingToolkit.build_function(vec(jac),u,parallel=ModelingToolkit.DistributedForm())[2])
_jac = similar(jac,Float64)
_u = rand(N,N,3) # numeric input for the generated jacobian functions (assumed)
@btime serialjac(_jac,_u)
@btime multithreadedjac(_jac,_u)
@btime distributedjac(_jac,_u)
The benchmark above is by @ChrisRackauckas: https://github.com/SciML/ModelingToolkit.jl/pull/348. Here is the same setup at N = 32, stopping at the symbolic jacobian:
using ModelingToolkit, LinearAlgebra, SparseArrays
# Define the constants for the PDE
const α₂ = 1.0
const α₃ = 1.0
const β₁ = 1.0
const β₂ = 1.0
const β₃ = 1.0
const r₁ = 1.0
const r₂ = 1.0
const _DD = 100.0
const γ₁ = 0.1
const γ₂ = 0.1
const γ₃ = 0.1
const N = 32
const X = reshape([i for i in 1:N for j in 1:N],N,N)
const Y = reshape([j for i in 1:N for j in 1:N],N,N)
const α₁ = 1.0.*(X.>=4*N/5)
const Mx = Tridiagonal([1.0 for i in 1:N-1],[-2.0 for i in 1:N],[1.0 for i in 1:N-1])
const My = copy(Mx)
Mx[2,1] = 2.0
Mx[end-1,end] = 2.0
My[1,2] = 2.0
My[end,end-1] = 2.0
# Define the discretized PDE as an ODE function
function f!(du,u,p,t)
    A = @view u[:,:,1]
    B = @view u[:,:,2]
    C = @view u[:,:,3]
    dA = @view du[:,:,1]
    dB = @view du[:,:,2]
    dC = @view du[:,:,3]
    mul!(MyA,My,A)
    mul!(AMx,A,Mx)
    @. DA = _DD*(MyA + AMx)
    @. dA = DA + α₁ - β₁*A - r₁*A*B + r₂*C
    @. dB = α₂ - β₂*B - r₁*A*B + r₂*C
    @. dC = α₃ - β₃*C + r₁*A*B - r₂*C
end
# Define the symbolic state, derivative, and cache arrays
@variables du[1:N,1:N,1:3] u[1:N,1:N,1:3] MyA[1:N,1:N] AMx[1:N,1:N] DA[1:N,1:N] p t
f!(du,u,nothing,0.0)
jac = sparse(ModelingToolkit.jacobian(vec(du),vec(u),simplify=false))
This doesn't seem to complete on recent versions of SymbolicUtils.
OK, most of the time for that right now is taken up by expand_derivatives. It seems that https://github.com/SciML/ModelingToolkit.jl/pull/485 actually slows it down?? That's confusing me so much.
Yeah that's confusing... one thing we should really be doing though is computing the sparsity pattern before differentiation. That might help a ton, so sparsejacobian might just cut out 99% of the work.
Huh, nice! Wait a second, I'm just revalidating my statement about that PR. Will let you know.
How would you go about finding the sparsity without computing the derivative? Just occursin?
I think occursin to get the list of (I, J) pairs, and then call sparse(I, J, ones(length(I))).
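Roughly, that idea could look like the sketch below (naive_jacobian_sparsity is a hypothetical name, and occursin here stands in for whatever symbolic occurrence check ModelingToolkit uses):

using SparseArrays

# Sketch of occursin-based sparsity detection: mark entry (i, j) whenever
# variable j occurs anywhere in expression i.
function naive_jacobian_sparsity(exprs, vars)
    I = Int[]; J = Int[]
    for (i, ex) in enumerate(exprs), (j, v) in enumerate(vars)
        if occursin(v, ex)   # one occurrence check per (expression, variable) pair
            push!(I, i); push!(J, j)
        end
    end
    sparse(I, J, ones(length(I)), length(exprs), length(vars))
end

That is length(exprs) * length(vars) occurrence checks, which is where the occursin time discussed below ends up going.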
Never mind, I was testing it on different sets of expressions; the PR actually speeds it up by a lot.
I'm going to try and do the sparsejacobian then.
OK, sparsejacobian is like 4-5x better. Not 99%, unfortunately. A large amount of time is spent in occursin.
julia> @time jac = ModelingToolkit.sparsejacobian(vec(sdu),vec(u),simplify=false);
67.852660 seconds (93.16 M allocations: 1.475 GiB, 2.16% gc time)
julia> jac
3072×3072 SparseMatrixCSC{Expression,Int64} with 13184 stored entries:
> A large amount of time is spent in occursin.
Oh, I was operating under the assumption that was free 😆. I wonder if it could be made better. 5x isn't bad, though.
Yikes, OK, that timing is when I'm using sdu = simplify.(du); without the initial simplify it's 127 seconds. I guess simplification helps occursin. I'm so surprised that simplify is so much faster than occursin. It could be because of the closure magic you're doing with it. I'll investigate.
lmao time taken just by occursin: 121.449976 seconds (162.92 M allocations: 2.449 GiB, 2.14% gc time)
One might say it essentially occurs in occursin
Dicts are awesome for this purpose:
julia> @time jac = ModelingToolkit.jacobian_sparsity(vec(du),vec(u))
0.401049 seconds (2.62 M allocations: 61.215 MiB, 2.42% gc time)
3072×3072 SparseMatrixCSC{Bool,Int64} with 13184 stored entries:
wwwwwwwwwaaaaaaaaaaattttttttttt?
https://github.com/SciML/ModelingToolkit.jl/pull/498/files#diff-2fe5177ac5b669b0517a4fea2d9d0d7aR51
Gotta love dicts.
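For anyone curious, here is a rough sketch of that Dict trick, illustrated on plain Julia Expr/Symbol trees for concreteness (dict_jacobian_sparsity and cols! are hypothetical names, not what the PR above actually defines): build a Dict from variable to column index once, then a single walk over each expression collects its (I, J) entries with O(1) lookups instead of repeated occursin scans.

using SparseArrays

function dict_jacobian_sparsity(exprs, vars)
    col = Dict(v => j for (j, v) in enumerate(vars))   # variable -> column index, O(1) lookup
    I = Int[]; J = Int[]
    # Collect the set of columns (variables) appearing in one expression tree.
    function cols!(seen, ex)
        if ex isa Symbol
            haskey(col, ex) && push!(seen, col[ex])
        elseif ex isa Expr
            foreach(a -> cols!(seen, a), ex.args)
        end
        return seen
    end
    for (i, ex) in enumerate(exprs)
        for j in cols!(Set{Int}(), ex)
            push!(I, i); push!(J, j)
        end
    end
    sparse(I, J, fill(true, length(I)), length(exprs), length(vars))
end

# Example: the first row only touches x and z, the second only y and z.
dict_jacobian_sparsity([:(x + 2z), :(y * z)], [:x, :y, :z])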