
Parameter fitting to multiple experiments not working

Open lisa-schroeder opened this issue 1 year ago • 8 comments

I am new to Julia, so maybe I am overlooking an important point here. I am trying to fit parameters to multiple experiments. Fitting to a single set of experimental values works nicely, as does simulating the concentrations over time for different starting concentrations using an ensemble. However, when I try to fit the simple example below to multiple experiments, as explained in the Catalyst documentation (which forwards to DiffEqParamEstim), the optimization does not work.

I had to insert `p = prob.ps[[k1, k_1, k2, k3]]` into `prob_func`, because the parameter `p` needs to be a dict (although what I generate here is not a dict, yet it still works?). This fails during fitting, because there the `prob` cannot be indexed like that. I could build a dict using `prob.p[1].value`, but oddly the type is not consistent: within a single optimization run, `prob.p` inside `prob_func` is sometimes a `Vector{Float64}` and sometimes a `Vector{ForwardDiff.Dual{ForwardDiff.Tag{DifferentiationInterface.FixTail{DiffEqParamEstim.var"#29#30"{Nothing, typeof(DiffEqParamEstim.STANDARD_PROB_GENERATOR),...`. So how can I fit my parameters to the data of multiple experiments? Thank you for helping me!

using Catalyst
using Plots
using DiffEqParamEstim
using Optimization
using OptimizationOptimJL
using DifferentialEquations
using ForwardDiff

# creating reaction network
@parameters k1, k_1, k2, k3
rn = @reaction_network begin
    (k1, k_1), A + B <--> C
    k2, C --> D + E
    k3, D --> F
end

# Define initial conditions and parameters
u0 = [:A => 1.0, :B => 0.5, :C => 0.0, :D => 0.0, :E => 0.0, :F => 0.0]
ps = [:k1 => 1.0, :k_1 => 0.1, :k2 => 2.0, :k3 => 0.8]

# Generate synthetic data
tspan = (0.0, 10.0)
prob = ODEProblem(rn, u0, tspan, ps)
N = 3
u0_all = [[:A => 1.0, :B => 0.5, :C => 0.0, :D => 0.0, :E => 0.0, :F => 0.0],
    [:A => 1.2, :B => 0.5, :C => 0.0, :D => 0.0, :E => 0.0, :F => 0.0],
    [:A => 1.0, :B => 0.3, :C => 0.0, :D => 0.0, :E => 0.0, :F => 0.0]]

function prob_func(prob, i, repeat)
    p = prob.ps[[k1, k_1, k2, k3]]
    ODEProblem(rn, u0_all[i], tspan, p)
end
enprob = EnsembleProblem(prob, prob_func = prob_func)

# Check above does what we want
sim = solve(enprob, Tsit5(), trajectories = N)
plot(sim)

# Generate dataset
data_times = 0.0:0.1:10.0
sim = solve(enprob, Tsit5(), trajectories = N, saveat = data_times)
data = Array(sim)

# Building a loss function
losses = [L2Loss(data_times, data[:, :, i]) for i in 1:N]
loss(sim) = sum(losses[i](sim[i]) for i in 1:N)
loss(sim)       # should be zero

# Generate data with other parameters and check loss
u0_dummy = [:A => 1.0, :B => 1.0, :C => 1.0, :D => 1.0, :E => 1.0, :F => 1.0]
ps_dummy = [:k1 => 1.0, :k_1 => 1.0, :k2 => 1.0, :k3 => 1.0]

prob = ODEProblem(rn, u0_dummy, tspan, ps_dummy)
enprob = EnsembleProblem(prob, prob_func = prob_func)
sim = solve(enprob, Tsit5(), trajectories = N, saveat = data_times)
loss(sim)       # should be non-zero

# put into build_loss_objective
obj = build_loss_objective(enprob, Tsit5(), loss, Optimization.AutoForwardDiff(), trajectories = N, saveat = data_times)
lower = zeros(4)
upper = 2.0*ones(4)
optprob = OptimizationProblem(obj, [1.9, 0.2, 1.1, 0.7], lb = lower, ub = upper)
result = solve(optprob, Fminbox(BFGS()))
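
A possible workaround (a sketch, not a confirmed fix): instead of pulling the parameters out of `prob` and rebuilding the `ODEProblem` from scratch, `prob_func` can `remake` the incoming problem with only a new `u0`. Because the parameter object is passed through untouched, it keeps whatever type the optimizer supplied (`Float64` or `ForwardDiff.Dual`), which is exactly the inconsistency described above. This assumes `remake` accepts the symbolic `u0` maps, which recent SciMLBase versions do for Catalyst-built problems:

```julia
# Sketch of an alternative prob_func: swap only the initial condition and
# leave the problem's current parameter object untouched, so it can flow
# through Optimization.AutoForwardDiff() unchanged.
function prob_func(prob, i, repeat)
    remake(prob; u0 = u0_all[i])
end
```

With this version, the same `EnsembleProblem(prob, prob_func = prob_func)` construction can be reused for both the data-generation and the fitting steps, since `prob_func` no longer needs to index into `prob.ps`.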

lisa-schroeder — Oct 07 '24 09:10