Catalyst.jl
[v14 ready] Improve noise scaling implementation
One of the more niche features is the ability to (linearly) scale the noise magnitude in the CLE: https://docs.sciml.ai/Catalyst/stable/catalyst_applications/advanced_simulations/#Scaling-the-noise-magnitude-in-the-chemical-Langevin-equations. The current interface was a bit clunky.
The new new interface (as opposed to the new one from last year, which was meant to override the old one):
Reactions can be given a `noise_scaling` metadata. This can be any expression, which is multiplied by the noise terms generated by that reaction in the CLE. Any normal expression is valid. E.g.:
```julia
using Catalyst, StochasticDiffEq, Plots

# Note that `0 <--> Y` is two reactions, so metadata has to be supplied to each.
rn = @reaction_network begin
    (p,d), 0 <--> X
    (p,d), 0 <--> Y, ([noise_scaling=0.0], [noise_scaling=0.0])
end
u0 = [:X => 1.0, :Y => 1.0]
tspan = (0.0, 10.0)
ps = [:p => 10.0, :d => 1.0]
sprob = SDEProblem(rn, u0, tspan, ps)
sol = solve(sprob, ImplicitEM())
plot(sol)
```
If you want to have a parameter as a noise scaling parameter (like previously), you have to declare it separately:
```julia
rn = @reaction_network begin
    @parameters η
    (p,d), 0 <--> X
    (p,d), 0 <--> Y, ([noise_scaling=η], [noise_scaling=η])
end
u0 = [:X => 1.0, :Y => 1.0]
tspan = (0.0, 10.0)
ps = [:p => 10.0, :d => 1.0, :η => 0.1]
sprob = SDEProblem(rn, u0, tspan, ps)
sol = solve(sprob, ImplicitEM())
plot(sol)
```
For cases where you want the same noise scaling in all reactions, I have added a `@default_noise_scaling` option:
```julia
rn = @reaction_network begin
    @parameters η
    @default_noise_scaling η
    (p,d), 0 <--> X
end
u0 = [:X => 1.0]
tspan = (0.0, 10.0)
ps = [:p => 10.0, :d => 1.0, :η => 0.0]
sprob = SDEProblem(rn, u0, tspan, ps)
sol = solve(sprob, ImplicitEM())
plot(sol)
```
You can still supply `noise_scaling` metadata to individual reactions when using `@default_noise_scaling` (in which case the default value is overridden).
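To illustrate the override, here is a minimal sketch using the syntax introduced in this PR (the values are arbitrary):

```julia
using Catalyst

# Sketch using this PR's syntax: `Y`'s two reactions override the default
# scaling `η` with an explicit value of 1.0 (i.e. unscaled noise), while
# `X`'s reactions use the default.
rn = @reaction_network begin
    @parameters η
    @default_noise_scaling η
    (p,d), 0 <--> X
    (p,d), 0 <--> Y, ([noise_scaling=1.0], [noise_scaling=1.0])
end
```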
I considered having an option `@default_noise_scaling_parameter η` that combines `@parameters η` and `@default_noise_scaling η`, but decided to keep it like this.
Since we are having a breaking release, and probably the last one in a while, I am deprecating the previous syntax fully (only an error message is left).
I haven't written tests. If this update looks good I will fix that and update the docs as well.
Adding this macro seems like a good idea, and less clunky than the current approach. So I'd be in favor of that.
However, at a high level I don't like allowing mixed symbol and symbolic maps. It just worries me that it could lead to issues, and/or user expectations that won't be satisfied in other contexts. I also don't like having to slurp up `Any` components for the first mapping argument, the way that is being done here. If anything, the longer-term goal should be to get away from needing `Symbol`s as MTK's interface (hopefully) improves.
If one is going to have to use symbolics for a bunch of the components, one should just use them for all the components. The real issue you are encountering is that MTK doesn't yet support proper vector variables. If it did then you could just use `:η`. Since that is hopefully going to be supported soon, I'd prefer we just wait for that and keep the non-mixed interface for mappings for now. The easiest way to proceed for now is probably to recommend users do
```julia
using Catalyst, StochasticDiffEq, Plots
rn = @reaction_network begin
    @noise_scaling_parameters η[1:4]
    (p, d), 0 <--> X
    (p, d), 0 <--> Y
end
rn = complete(rn) # this allows rn.symbolic in mappings
u0map = [rn.X => 10.0, rn.Y => 10.0]
tspan = (0.0, 10.0)
pmap = [rn.p => 10.0, rn.d => 1., rn.η[1] => 0.1, rn.η[2] => 0.1, rn.η[3] => 1., rn.η[4] => 1.]
prob = SDEProblem(rn, u0map, tspan, pmap)
sol = solve(prob, ImplicitEM())
plot(sol; ylimit=(0.0, 20.0))
```
In fact, we should probably consider at some point changing the docs to use this approach of completing a model and `rn.X` instead of the symbol approach.
If you prefer not to allow mixed symbols/symbolics I could change that. Generally, I think I prefer using `:X` rather than `rn.X`, mostly because
```julia
σV_network = @reaction_network begin
    v0 + hill(σᵛ,v,K,n), ∅ → (σᵛ+A)
    deg, (σᵛ,A,Aσᵛ) → ∅
    (kB,kD), A + σᵛ ↔ Aσᵛ
    L*kC*Aσᵛ, Aσᵛ ⟾ σᵛ
end
p = [σV_network.v0 => 0.1, σV_network.v => 2.5, σV_network.K => 60,
     σV_network.n => 2, σV_network.kD => 5, σV_network.kB => 10,
     σV_network.kC => 0.05, σV_network.deg => 0.01, σV_network.L => 0.0]
```
becomes really messy. Either way, that's a discussion we can take another time.
Yeah, why don't you get the functionality in here without changing the symbol mappings. Then we can discuss ways to improve their construction as a separate issue / PR.
Also, and I haven't looked at your code closely, but can you make sure to still support the old-style noise scaling for now, but add a deprecation warning that it will be removed in favor of this new version in a future release? We should give an error if someone uses both. I'd prefer not to have to make a breaking release until we have some larger changes built up.
Currently, we don't change the previous way. The previous way is basically how it is handled internally anyway, and it has always felt a bit hacky and incomplete. The new version uses the interface of the previous way.
OK, but if someone passes it the previous way via a kwarg it should now give a deprecation warning but still work, unless they pass it both the new way and the old way, in which case an error should be printed. That way we aren't making a breaking change, but we are warning users the old interface will go away in the future.
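The requested behaviour could be sketched as follows (a hypothetical helper; the function name and messages are illustrative and not Catalyst's actual internals):

```julia
# Hypothetical sketch of the requested deprecation behaviour (names are
# illustrative, not Catalyst internals): warn on the old kwarg alone,
# error if both the old kwarg and the new metadata are supplied.
function check_noise_scaling_input(has_old_kwarg::Bool, has_new_metadata::Bool)
    if has_old_kwarg && has_new_metadata
        error("Noise scaling was provided both via the deprecated `noise_scaling` " *
              "kwarg and via reaction metadata; please use only the metadata form.")
    elseif has_old_kwarg
        @warn "Passing noise scaling as a kwarg to `SDEProblem` is deprecated; " *
              "use the `noise_scaling` reaction metadata instead."
    end
    return nothing
end
```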
New update:
- Attempting to scale noise by providing a kwarg to `SDEProblem` now throws a deprecation warning (or an error, if one has also defined some parameters as noise scaling parameters).
- Support for mixed symbols/symbolics in input vectors has been dropped.
- The previous test was superfluous given the new one, so I dropped it. Of the 3 remaining tests, I changed it so that only one actually runs a simulation.
I'm sorry, but I am stuck. I've been trying all evening, but have not been able to reuse the species approach, having tried several versions of it. I think I am almost there, although it required a new `@noise_scaling_parameters` macro (similar to the `@species` macro), which is what is pasted at the end. However, the resulting output is not enough, as the result is not considered a parameter.
Here is my macro:
```julia
macro noise_scaling_parameters(ex...)
    vars = Symbolics._parse_parameters(:parameters, Real, ex)
    # vector of symbols that get defined
    lastarg = vars.args[end]
    # start adding metadata statements where the vector of symbols was previously declared
    idx = length(vars.args)
    resize!(vars.args, idx + length(lastarg.args) + 1)
    for sym in lastarg.args
        vars.args[idx] = :($sym = ModelingToolkit.wrap(setmetadata(ModelingToolkit.value($sym),
                                                                   Catalyst.NoiseScalingParameter,
                                                                   true)))
        idx += 1
    end
    # check nothing was declared isconstantspecies
    ex = quote end
    vars.args[idx] = ex
    idx += 1
    # put back the vector of the new species symbols
    vars.args[idx] = lastarg
    esc(vars)
end
```
However,
```julia
using Catalyst
@noise_scaling_parameters η
@parameters k1 k2
@variables t
@species X1(t) X2(t)
reaction = Reaction(k1, [X1], [X2], [1], [1])
@named rs = ReactionSystem([reaction], t, [X1, X2], [k1, k2, η])
```
gives `ERROR: ArgumentError: η is not a parameter.`
I tried putting a `println(vars)` at the end of `@noise_scaling_parameters` and `@species` to see how they are different. Here `@noise_scaling_parameters η` gives
```julia
begin
    η = (identity)((Symbolics.wrap)((SymbolicUtils.setmetadata)((Sym){Real}(:η), Symbolics.VariableSource, (:parameters, :η))))
    η = ModelingToolkit.wrap(setmetadata(ModelingToolkit.value(η), Catalyst.NoiseScalingParameter, true))
    begin
        #= /home/torkelloman/.julia/dev/Catalyst/src/reaction_network.jl:696 =#
    end
    [η]
end
```
and `@species K(t)`:
```julia
begin
    K = (identity)((Symbolics.wrap)((SymbolicUtils.setmetadata)(((Sym){(SymbolicUtils.FnType){NTuple{1, Any}, Real}}(:K))((Symbolics.value)(t)), Symbolics.VariableSource, (:variables, :K))))
    K = ModelingToolkit.wrap(setmetadata(ModelingToolkit.value(K), Catalyst.VariableSpecies, true))
    begin
        #= /home/torkelloman/.julia/dev/Catalyst/src/reaction_network.jl:147 =#
        all(!(Catalyst.isconstant) ∘ ModelingToolkit.value, [K]) || throw(ArgumentError("isconstantspecies metadata can only be used with parameters."))
    end
    [K]
end
```
but I cannot interpret this very well. `Symbolics.VariableSource` could have something to do with variables, but I am unsure how to change this.
We should make it a normal standalone macro too though. Maybe get that working first?
I can take a closer look later today.
@TorkelE I think the issue is you aren't calling `_parse_vars` correctly for parameters. See how the `@parameters` macro is defined in MTK:
https://github.com/SciML/ModelingToolkit.jl/blob/1aa480673305854e889be2211eb5b0e4487038ae/src/parameters.jl#L58-L62
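Based on that definition, a minimal sketch of the fix might look like the following. This is untested: it assumes the fourth `transform` argument of `Symbolics._parse_vars` is applied to each declared variable (as with `toparam` in MTK's `@parameters`), and reuses the `NoiseScalingParameter` metadata key from this PR.

```julia
# Untested sketch: mirror MTK's `@parameters`, which calls
# `Symbolics._parse_vars(:parameters, Real, xs, toparam)`, but additionally
# attach this PR's `NoiseScalingParameter` metadata to each variable.
macro noise_scaling_parameters(ex...)
    transform = x -> Symbolics.wrap(setmetadata(
        ModelingToolkit.toparam(Symbolics.value(x)),
        Catalyst.NoiseScalingParameter, true))
    esc(Symbolics._parse_vars(:parameters, Real, ex, transform))
end
```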
- Tests for standalone `@noise_scaling_parameters` macro?
- Test that interpolation works with noise scaling? Both inside and outside the DSL? I.e.

```julia
a = :b
@noise_scaling_parameters $(a) # check that the symbolic b works / becomes available?
```
Will fix this.
Your comments should be fixed.
I have also updated to check that `@noise_scaling_parameters` works in normal code, and that this can be used to create a programmatic network with scaling. One of the previous tests has been updated to use this approach.
Regarding interpolation to make this work:
```julia
η_stored = :η
rn = @reaction_network begin
    @noise_scaling_parameters $(η_stored)
    (p, d), 0 <--> X
end
```
this is currently not possible due to https://github.com/SciML/Catalyst.jl/issues/692.
I also wanted to use interpolation for the `@noise_scaling_parameters` test. However, this is not possible:
```julia
η_stored = :η
@variables t
@species X1(t) X2(t)
@noise_scaling_parameters $(η_stored) = 0.0
@parameters k1 k2
r1 = Reaction(k1, [X1], [X2], [1], [1])
r2 = Reaction(k2, [X2], [X1], [1], [1])
@named noise_scaling_network = ReactionSystem([r1, r2], t, [X1, X2], [k1, k2, η])
```
since simply running `η` in the main scope causes an error (preventing me from passing it into `noise_scaling_network`); again, see https://github.com/SciML/Catalyst.jl/issues/692.
I could add tests for these two cases and mark them as broken, but until we figure https://github.com/SciML/Catalyst.jl/issues/692 out I think that is as far as we can get.
The second (non-DSL) example should work -- it works with `@parameters`. As I said in the issue, the first is ok not to have working, as it isn't how we set the DSL up to work with `@parameters` either.
If it is desired behaviour, I will do the second one with
```julia
η_stored = :η
@variables t
@species X1(t) X2(t)
ηs = @noise_scaling_parameters $(η_stored) = 0.0
@parameters k1 k2
r1 = Reaction(k1, [X1], [X2], [1], [1])
r2 = Reaction(k2, [X2], [X1], [1], [1])
@named noise_scaling_network = ReactionSystem([r1, r2], t, [X1, X2], [k1, k2, ηs[1]])
```
Since we are more or less set on syntax, I have rewritten the noise scaling documentation given this update. I have also:
- Updated the test as described.
- Added an additional getter, `noise_scaling_parameters`, which returns a system's noise scaling parameters. The main reason is that when there are several reactions, the order is used to decide which parameter scales the noise of which reaction, so I wanted a good function allowing people to confirm this order. This method is not ideal (but the case is rather niche). At some point we might be able to add metadata for reactions or something. At least when creating reaction systems programmatically, it might be possible to make a nicer interface for how to do this, but I'd say that's beyond the scope of this PR.
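A usage sketch of the getter described above (the return value's exact form is assumed from the description in this thread, not from running the code):

```julia
using Catalyst

# Sketch: `noise_scaling_parameters` (added in this PR) returns the system's
# noise scaling parameters in the order in which they are matched to
# reactions, so one can confirm which parameter scales which reaction's noise.
rn = @reaction_network begin
    @noise_scaling_parameters η[1:2]
    (p, d), 0 <--> X
end
noise_scaling_parameters(rn)
```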
For the second case, I tried to add API for noise_scaling_parameters()
to the docs. I have never really done this properly before, so you might want to check it is as you want it to be.
Many of the tests seem pretty removed from actually testing that the scaling is working. Is there not something more precise that could be tested? For example, manually creating the same symbolic SDEs and checking if the generated noise functions evaluate to the same values on specific inputs (checking if the symbolic noise terms are the same would be even better I'd think)? That seems a more direct test than calling solve like is done here.
Could add some direct tests as well. The idea was basically to check in real simulations that the level of noise is affected by the noise scaling. But I agree that direct tests of the function would work as well.
Finally, I don't like this mixing of noise scaling being metadata at both system and reaction levels and the changes it then induces in the DSL code (making it more complicated for such a niche case). It seems like if one wants to apply a uniform scaling to all reactions and not specify it when creating reactions, this would be better to be implemented as a system transformation function in the API as I mention in my comments.
Agreed here, the interpretation is not necessarily unique like this.
Just want to point out that we never mix system and reaction level metadata; that is just a misunderstanding. The only case where we (currently) use reaction metadata is one where you (in almost all cases) want the same value across all reactions. Hence, I added an option `@default_noise_scaling η` which just sets that input (here `η`) as the noise scaling across all reactions. In case someone actually gives a `noise_scaling` metadata to one reaction, that will override this default. This is only something we have on system creation (and currently only for noise scaling). It is never anything that is stored on the system level.
(This is something I've used personally to avoid writing the same noise scaling for 30 reactions in a stress response model)
See above responses. I will have another read through the doc part. Generally, I didn't spend as much time on it as I should have, because I had already had to re-write it once unnecessarily, and this is a part that I plan to rewrite once we redo parts of the docs (so I didn't want to spend too much effort on it until that is done).
But why not add a default via a system transformation instead of a special DSL command? The former would have the benefit of preserving equivalence between DSL and symbolic construction, and work for both. The system transformation could be something like `newsys = scale_reaction_noise(sys, scaling; keep_default_scaling = true)` or such, and would then be usable on any `ReactionSystem` no matter how it is created.
Even if we keep the DSL command, I'd advocate that it simply correspond to applying that transformation function after creating the system within the DSL instead of merging scalings via processing of the DSL expressions. It seems cleaner to me, and ensures consistency between DSL and symbolic approaches (plus it seems like a useful function to have in general).
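A rough sketch of what such a transformation might look like. Note that `scale_reaction_noise` is only the name proposed above, not an existing Catalyst function, and the metadata handling (the `has_scaling` helper, the `metadata` kwarg of `Reaction`, and the system rebuild) is illustrative and untested; field and constructor names may differ across Catalyst versions.

```julia
# Rough, untested sketch of the proposed transformation
# (`scale_reaction_noise` is not an existing Catalyst function):
# attach `noise_scaling` metadata to every reaction, optionally keeping
# scalings that were already supplied per reaction.
function scale_reaction_noise(sys, scaling; keep_default_scaling = true)
    # hypothetical helper: does this reaction already carry a scaling?
    has_scaling(rx) = any(md -> md[1] == :noise_scaling, rx.metadata)
    new_rxs = map(reactions(sys)) do rx
        (keep_default_scaling && has_scaling(rx)) && return rx
        Reaction(rx.rate, rx.substrates, rx.products, rx.substoich, rx.prodstoich;
                 only_use_rate = rx.only_use_rate,
                 metadata = [rx.metadata; :noise_scaling => scaling])
    end
    ReactionSystem(new_rxs, Catalyst.get_iv(sys), species(sys), parameters(sys);
                   name = nameof(sys))
end
```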
I just don't see why this specific thing should be a system transformation, and not:

- Adding metadata to a system.
- Adding default values to species/parameters.
- Specifying that a parameter should be of the `Int64` type.

and so on. Right now there is no part of model creation that is handled as a system transformation (if we ignore composition) rather than just being something you define at system creation. There might be some good reason why just noise scaling in SDEs should be the exception, but I think in that case we'd have to discuss that in person.
We can add a default argument for noise scaling in the `ReactionSystem` if you want.
Just to address some of these points:

> Adding metadata to a system.

We don't use system metadata? (I know MTK has a notion of it, but I don't think it is used by anything except JuliaHub internal stuff?)

> Adding default values to species/parameters

This was an MTK choice long ago, so not really something we control. I agree it is confusing to have three ways to specify the default value of a parameter or initial condition (i.e. via metadata to the symbolic, via defaults, or via the mappings). But that just supports the idea that we should only handle noise scaling in one way (i.e. as metadata within reactions), and not try to also handle it with, for example, a conflicting system property that complicates things more.

> Specifying that a parameter should be of the `Int64` type

This is a new choice made by MTK, and not something we control in Catalyst, but I don't see how it relates to this PR at all. I personally would have preferred if this decision was only made at problem construction time, as it reduces flexibility in modeling, as you've observed in some of the broken tests I think. But it is done, so we need to deal with it.
Here is why I prefer making this a transformation:

- It doesn't change the DSL setup you've created, in that one can still have the default scaling command. It just eliminates some messy expression hacking, since one doesn't need to reconcile that command with scalings passed via reaction metadata (i.e. at the end of the DSL we just conclude with generating a `sys = apply_default_scaling(sys, scaling)` or such command if needed).
- It immediately works for `ReactionSystem`s one is just given, including systems coming from SBML or that are symbolically created. This seems like a nice gain.
- I think it is much easier to understand, reason about, and in the future update: a simple function that loops over reactions and adds the appropriate metadata scaling to them, vs. trying to read through and update DSL expression hacking code. I generally would strongly prefer that we minimize DSL updates that require expression hacking, to keep the code as compact and simple as possible (within the constraint of now trying to maintain 1:1 compatibility with the symbolic interface). To be honest, I'd like to at some point redo the DSL code and, if possible, switch to what has been built for MTK to reduce what we need to maintain here even more.

Anyways, we can discuss more on Wednesday if needed.
I agree in principle, but think this is a combination of refactoring and new features, and something that can be saved for a later day. Yes, discussing it on Wednesday is probably better to ensure that we are on the same page.