ModelingToolkit.jl
Float64 constants result in Float64 results
Describe the bug 🐞
Currently, if a system contains a constant (e.g. @constants g = 9.8), and the constant is defined as a Float64 number, the result of the simulation will also be Float64, regardless of the types of u0 and tspan. For example:
using ModelingToolkit
using ModelingToolkit: t_nounits as t, D_nounits as D
using OrdinaryDiffEq
@constants g = 9.8
@variables X(t)
eqs = [D(X) ~ g]
@mtkbuild osys = System(eqs, t)
u0 = [X => 1.0f0]
oprob = ODEProblem{false}(osys, u0, 1.0f0)
sol = solve(oprob)
typeof(oprob[X]) # Float64
eltype(sol[X]) # Float64
However, if the constant is a Float32, then the result can also be Float32, provided everything else is defined appropriately:
@constants g = 9.8f0
eqs = [D(X) ~ g]
@mtkbuild osys = System(eqs, t)
oprob = ODEProblem(osys, u0, 1.0f0)
sol = solve(oprob)
typeof(oprob[X]) # Float32
eltype(sol[X]) # Float32
This seems problematic for large systems with equations distributed across different packages, because it effectively prevents the end user from selecting the precision. This seems like a problem someone else may have already run into, though, so let me know if I'm just missing something here.
Environment (please complete the following information):
ModelingToolkit v10.2.0
I'm not sure we should override Julia's type promotion semantics, but it would make sense for any "standard" constant (one whose type was not explicitly chosen) to adopt the u0/p (tunable) type.
It's a little iffy, since we don't want to promote constants to Duals (which u0/tunables can be). The reason this promotes is that the codegen promotes all floating-point values in the operating point. Note that this can be overridden by providing (in this case) g => 9.8f0 to ODEProblem. I'm actually a little surprised that it has a u0 of Float32 at all; IIRC it should just be Float64. I'll try to think of a nice API/behavior for this.
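For concreteness, a minimal sketch of that override, continuing from the first snippet above (this assumes the constant can be overridden alongside the unknowns in the map passed to ODEProblem, as described in the previous comment):
# osys was built with g = 9.8; supplying a Float32 value for the
# constant in the map keeps the whole problem in Float32:
oprob32 = ODEProblem(osys, [X => 1.0f0, g => 9.8f0], 1.0f0)
sol32 = solve(oprob32)
typeof(oprob32[X]) # should now be Float32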
Thanks! I didn't realize that the constants could be overridden, so that could be a good short-term fix, but obviously that wouldn't be practical for a large-scale model. The only alternative I can think of without a change to MTK would be for the end user to make every constant Float32 when it is originally defined; it would then be promoted to Float64 when appropriate. But that also seems difficult to enforce in a large code base.
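For reference, the promotion this alternative relies on is just Julia's standard rule that Float32 widens to Float64 whenever the two are mixed:
typeof(9.8f0 + 1.0)   # Float64: the Float32 constant widens when mixed with Float64
typeof(9.8f0 + 1.0f0) # Float32: precision stays narrow only if every operand is Float32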
DiffEqBase.value(u0) is what we want for the constants.
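A minimal sketch of that idea (constant_type and promote_constant are hypothetical helpers, not MTK's actual codegen): DiffEqBase.value unwraps ForwardDiff.Dual numbers, so converting a constant to the value type of u0 tracks the user's chosen precision without ever promoting the constant to a Dual:
using DiffEqBase, ForwardDiff

constant_type(u0) = typeof(DiffEqBase.value(first(u0)))  # underlying value type of u0's elements
promote_constant(c, u0) = convert(constant_type(u0), c)  # re-type the constant to match

promote_constant(9.8, [1.0f0])                          # 9.8f0 (Float32)
promote_constant(9.8, [ForwardDiff.Dual(1.0f0, 1.0f0)]) # 9.8f0, not a Dual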