[Bug] Reverse over forward AD issues with ADNLP
@ocots @PierreMartinon Same kind of error with AD as in #384, though here the problem seems to be directly related to using a constant in the computation. This fails (Cannot determine ordering of Dual tags ForwardDiff.Tag):
using OptimalControl
using NLPModelsIpopt
o = @def begin
v ∈ R, variable
t ∈ [0, 1], time
x ∈ R², state
u ∈ R, control
-1 ≤ v ≤ 1
x(0) - [-1, v] == [0, 0] # OK: the boundary constraint may involve the variable
x(1) == [0, 0]
ẋ(t) == [x₂(t), u(t)]
∫( 0.5u(t)^2 ) → min
end
s = solve(o)
This also fails (same error):
o = @def begin
v ∈ R, variable
t ∈ [0, 1], time
x ∈ R², state
u ∈ R, control
-1 ≤ v ≤ 1
x(0) - [-one(v), v] == [0, 0] # OK: the boundary constraint may involve the variable
x(1) == [0, 0]
ẋ(t) == [x₂(t), u(t)]
∫( 0.5u(t)^2 ) → min
end
While this is OK:
o = @def begin
v ∈ R, variable
t ∈ [0, 1], time
x ∈ R², state
u ∈ R, control
-1 ≤ v ≤ 1
x(0) - [-1 + 0v, v] == [0, 0] # OK: the boundary constraint may involve the variable
x(1) == [0, 0]
ẋ(t) == [x₂(t), u(t)]
∫( 0.5u(t)^2 ) → min
end
@PierreMartinon I thought that using one as above (second version; see also zero) solved the problem, but this is not the case. Any clue?
@amontoison @gdalle input welcome 🤞🏽
@jbcaillau If you can provide the full error, it will help.
@amontoison full log attached (copy-paste of the first cell above):
> If you can provide the full error, it will help.
julia> using OptimalControl
julia> using NLPModelsIpopt
julia> o = @def begin
v ∈ R, variable
t ∈ [0, 1], time
x ∈ R², state
u ∈ R, control
-1 ≤ v ≤ 1
x(0) - [-1, v] == [0, 0] # OK: the boundary constraint may involve the variable
x(1) == [0, 0]
ẋ(t) == [x₂(t), u(t)]
∫( 0.5u(t)^2 ) → min
end
v ∈ R, variable
t ∈ [0, 1], time
x ∈ R², state
u ∈ R, control
-1 ≤ v ≤ 1
x(0) - [-1, v] == [0, 0]
x(1) == [0, 0]
ẋ(t) == [x₂(t), u(t)]
∫(0.5 * u(t) ^ 2) → min
The optimal control problem is of the form:
minimize J(x, u, v) = ∫ f⁰(t, x(t), u(t), v) dt, over [0, 1]
subject to
ẋ(t) = f(t, x(t), u(t), v), t in [0, 1] a.e.,
ϕ₋ ≤ ϕ(x(0), x(1), v) ≤ ϕ₊,
v₋ ≤ v ≤ v₊,
where x(t) ∈ R², u(t) ∈ R and v ∈ R.
Declarations (* required):
╭────────┬────────┬──────────┬──────────┬───────────┬────────────┬─────────────╮
│ times* │ state* │ control* │ variable │ dynamics* │ objective* │ constraints │
├────────┼────────┼──────────┼──────────┼───────────┼────────────┼─────────────┤
│ V │ V │ V │ V │ V │ V │ V │
╰────────┴────────┴──────────┴──────────┴───────────┴────────────┴─────────────╯
julia> s = solve(o)
This is Ipopt version 3.14.17, running with linear solver MUMPS 5.7.3.
Number of nonzeros in equality constraint Jacobian...: 3006
Number of nonzeros in inequality constraint Jacobian.: 0
Number of nonzeros in Lagrangian Hessian.............: 251
Total number of variables............................: 1005
variables with only lower bounds: 0
variables with lower and upper bounds: 1
variables with only upper bounds: 0
Total number of equality constraints.................: 755
Total number of inequality constraints...............: 0
inequality constraints with only lower bounds: 0
inequality constraints with lower and upper bounds: 0
inequality constraints with only upper bounds: 0
ERROR: Cannot determine ordering of Dual tags ForwardDiff.Tag{ReverseDiff.var"#130#133"{typeof(-), ForwardDiff.Dual{ForwardDiff.Tag{ADNLPModels.var"#ψ#72"{CTDirect.var"#14#16"{CTDirect.DOCP{CTDirect.Trapeze, Model{CTModels.TimesModel{CTModels.FixedTimeModel{Int64}, CTModels.FixedTimeModel{Int64}}, CTModels.StateModel, CTModels.ControlModel, CTModels.VariableModel, var"##248#3", CTModels.LagrangeObjectiveModel{var"##252#4"}, CTModels.ConstraintsModel{Tuple{Vector{Real}, CTModels.var"#path_cons_nl!#59"{Int64, Vector{Int64}, Tuple{}}, Vector{Real}}, Tuple{Vector{Real}, CTModels.var"#boundary_cons_nl!#61"{Int64, Vector{Int64}, Tuple{var"##242#2", var"##237#1"}}, Vector{Real}}, Tuple{Vector{Real}, Vector{Int64}, Vector{Real}}, Tuple{Vector{Real}, Vector{Int64}, Vector{Real}}, Tuple{Vector{Real}, Vector{Int64}, Vector{Real}}, Dict{Symbol, Tuple{Symbol, Union{Function, OrdinalRange{<:Int64}}, AbstractVector{<:Real}, AbstractVector{<:Real}}}}}}}}, Float64}, Float64, 1}}, ForwardDiff.Dual{ForwardDiff.Tag{ADNLPModels.var"#ψ#72"{CTDirect.var"#14#16"{CTDirect.DOCP{CTDirect.Trapeze, Model{CTModels.TimesModel{CTModels.FixedTimeModel{Int64}, CTModels.FixedTimeModel{Int64}}, CTModels.StateModel, CTModels.ControlModel, CTModels.VariableModel, var"##248#3", CTModels.LagrangeObjectiveModel{var"##252#4"}, CTModels.ConstraintsModel{Tuple{Vector{Real}, CTModels.var"#path_cons_nl!#59"{Int64, Vector{Int64}, Tuple{}}, Vector{Real}}, Tuple{Vector{Real}, CTModels.var"#boundary_cons_nl!#61"{Int64, Vector{Int64}, Tuple{var"##242#2", var"##237#1"}}, Vector{Real}}, Tuple{Vector{Real}, Vector{Int64}, Vector{Real}}, Tuple{Vector{Real}, Vector{Int64}, Vector{Real}}, Tuple{Vector{Real}, Vector{Int64}, Vector{Real}}, Dict{Symbol, Tuple{Symbol, Union{Function, OrdinalRange{<:Int64}}, AbstractVector{<:Real}, AbstractVector{<:Real}}}}}}}}, Float64}, Float64, 1}} and ForwardDiff.Tag{ADNLPModels.var"#ψ#72"{CTDirect.var"#14#16"{CTDirect.DOCP{CTDirect.Trapeze, 
Model{CTModels.TimesModel{CTModels.FixedTimeModel{Int64}, CTModels.FixedTimeModel{Int64}}, CTModels.StateModel, CTModels.ControlModel, CTModels.VariableModel, var"##248#3", CTModels.LagrangeObjectiveModel{var"##252#4"}, CTModels.ConstraintsModel{Tuple{Vector{Real}, CTModels.var"#path_cons_nl!#59"{Int64, Vector{Int64}, Tuple{}}, Vector{Real}}, Tuple{Vector{Real}, CTModels.var"#boundary_cons_nl!#61"{Int64, Vector{Int64}, Tuple{var"##242#2", var"##237#1"}}, Vector{Real}}, Tuple{Vector{Real}, Vector{Int64}, Vector{Real}}, Tuple{Vector{Real}, Vector{Int64}, Vector{Real}}, Tuple{Vector{Real}, Vector{Int64}, Vector{Real}}, Dict{Symbol, Tuple{Symbol, Union{Function, OrdinalRange{<:Int64}}, AbstractVector{<:Real}, AbstractVector{<:Real}}}}}}}}, Float64}
Stacktrace:
[1] value
@ ~/.julia/packages/ForwardDiff/UBbGT/src/dual.jl:98 [inlined]
[2] (::ForwardDiff.var"#76#77"{ForwardDiff.Tag{…}})(d::ForwardDiff.Dual{ForwardDiff.Tag{…}, ForwardDiff.Dual{…}, 1})
@ ForwardDiff ~/.julia/packages/ForwardDiff/UBbGT/src/apiutils.jl:6
[3] value!(f::ForwardDiff.var"#76#77"{…}, r::DiffResults.ImmutableDiffResult{…}, x::ForwardDiff.Dual{…})
@ DiffResults ~/.julia/packages/DiffResults/YLo25/src/DiffResults.jl:168
[4] extract_value!
@ ~/.julia/packages/ForwardDiff/UBbGT/src/apiutils.jl:5 [inlined]
[5] derivative!
@ ~/.julia/packages/ForwardDiff/UBbGT/src/derivative.jl:49 [inlined]
[6] binary_scalar_forward_exec!(f::typeof(-), output::ReverseDiff.TrackedReal{…}, input::Tuple{…}, cache::Base.RefValue{…})
@ ReverseDiff ~/.julia/packages/ReverseDiff/rKZaG/src/derivatives/scalars.jl:114
[7] scalar_forward_exec!(instruction::ReverseDiff.ScalarInstruction{typeof(-), Tuple{…}, ReverseDiff.TrackedReal{…}, Base.RefValue{…}})
@ ReverseDiff ~/.julia/packages/ReverseDiff/rKZaG/src/derivatives/scalars.jl:88
[8] forward_exec!(instruction::ReverseDiff.ScalarInstruction{typeof(-), Tuple{…}, ReverseDiff.TrackedReal{…}, Base.RefValue{…}})
@ ReverseDiff ~/.julia/packages/ReverseDiff/rKZaG/src/tape.jl:82
[9] ForwardExecutor
@ ~/.julia/packages/ReverseDiff/rKZaG/src/api/tape.jl:76 [inlined]
[10] (::FunctionWrappers.CallWrapper{Nothing})(f::ReverseDiff.ForwardExecutor{ReverseDiff.ScalarInstruction{…}})
@ FunctionWrappers ~/.julia/packages/FunctionWrappers/Q5cBx/src/FunctionWrappers.jl:65
[11] macro expansion
@ ~/.julia/packages/FunctionWrappers/Q5cBx/src/FunctionWrappers.jl:137 [inlined]
[12] do_ccall
@ ~/.julia/packages/FunctionWrappers/Q5cBx/src/FunctionWrappers.jl:125 [inlined]
[13] FunctionWrapper
@ ~/.julia/packages/FunctionWrappers/Q5cBx/src/FunctionWrappers.jl:144 [inlined]
[14] forward_pass!(compiled_tape::ReverseDiff.CompiledTape{ReverseDiff.GradientTape{ADNLPModels.var"#ψ#72"{…}, Tuple{…}, ReverseDiff.TrackedReal{…}}})
@ ReverseDiff ~/.julia/packages/ReverseDiff/rKZaG/src/api/tape.jl:124
[15] seeded_forward_pass!
@ ~/.julia/packages/ReverseDiff/rKZaG/src/api/tape.jl:42 [inlined]
[16] gradient!(result::Tuple{Vector{…}, Vector{…}}, tape::ReverseDiff.CompiledTape{ReverseDiff.GradientTape{…}}, input::Tuple{Vector{…}, Vector{…}})
@ ReverseDiff ~/.julia/packages/ReverseDiff/rKZaG/src/api/gradients.jl:79
[17] ∇l!
@ ~/.julia/packages/ADNLPModels/nR2jK/src/sparse_hessian.jl:215 [inlined]
[18] ∇l!
@ ~/.julia/packages/ADNLPModels/nR2jK/src/sparse_hessian.jl:214 [inlined]
[19] sparse_hess_coord!(b::ADNLPModels.SparseReverseADHessian{…}, x::Vector{…}, obj_weight::Float64, y::SubArray{…}, vals::Vector{…})
@ ADNLPModels ~/.julia/packages/ADNLPModels/nR2jK/src/sparse_hessian.jl:351
[20] hess_coord!(b::ADNLPModels.SparseReverseADHessian{…}, nlp::ADNLPModels.ADNLPModel{…}, x::Vector{…}, y::SubArray{…}, obj_weight::Float64, vals::Vector{…})
@ ADNLPModels ~/.julia/packages/ADNLPModels/nR2jK/src/sparse_hessian.jl:375
[21] hess_coord!(nlp::ADNLPModels.ADNLPModel{Float64, Vector{…}, Vector{…}}, x::Vector{Float64}, y::Vector{Float64}, vals::Vector{Float64}; obj_weight::Float64)
@ ADNLPModels ~/.julia/packages/ADNLPModels/nR2jK/src/nlp.jl:706
[22] hess_coord!
@ ~/.julia/packages/ADNLPModels/nR2jK/src/nlp.jl:695 [inlined]
[23] (::NLPModelsIpopt.var"#eval_h#5"{…})(x::Vector{…}, rows::Vector{…}, cols::Vector{…}, σ::Float64, λ::Vector{…}, values::Vector{…})
@ NLPModelsIpopt ~/.julia/packages/NLPModelsIpopt/OGzSv/src/NLPModelsIpopt.jl:132
[24] _Eval_H_CB(n::Int32, x_ptr::Ptr{…}, ::Int32, obj_factor::Float64, m::Int32, lambda_ptr::Ptr{…}, ::Int32, nele_hess::Int32, iRow::Ptr{…}, jCol::Ptr{…}, values_ptr::Ptr{…}, user_data::Ptr{…})
@ Ipopt ~/.julia/packages/Ipopt/3Ojbq/src/C_wrapper.jl:129
[25] #5
@ ~/.julia/packages/Ipopt/3Ojbq/src/C_wrapper.jl:407 [inlined]
[26] disable_sigint(f::Ipopt.var"#5#6"{Ipopt.IpoptProblem, Base.RefValue{Float64}})
@ Base ./c.jl:167
[27] IpoptSolve
@ ~/.julia/packages/Ipopt/3Ojbq/src/C_wrapper.jl:406 [inlined]
[28] solve!(solver::IpoptSolver, nlp::ADNLPModels.ADNLPModel{…}, stats::SolverCore.GenericExecutionStats{…}; callback::Function, kwargs::@Kwargs{…})
@ NLPModelsIpopt ~/.julia/packages/NLPModelsIpopt/OGzSv/src/NLPModelsIpopt.jl:263
[29] solve!
@ ~/.julia/packages/NLPModelsIpopt/OGzSv/src/NLPModelsIpopt.jl:184 [inlined]
[30] #solve!#16
@ ~/.julia/packages/SolverCore/vTpnV/src/solver.jl:40 [inlined]
[31] solve!
@ ~/.julia/packages/SolverCore/vTpnV/src/solver.jl:38 [inlined]
[32] solve_docp(solver_backend::CTDirect.IpoptBackend, docp::CTDirect.DOCP{…}, nlp::ADNLPModels.ADNLPModel{…}; display::Bool, max_iter::Int64, tol::Float64, print_level::Int64, mu_strategy::String, linear_solver::String, kwargs::@Kwargs{})
@ CTSolveExtIpopt ~/.julia/packages/CTDirect/rvI7x/ext/CTSolveExtIpopt.jl:55
[33] solve(::Model{…}; display::Bool, grid_size::Int64, disc_method::Symbol, time_grid::Nothing, init::Nothing, adnlp_backend::Symbol, kwargs::@Kwargs{})
@ CTDirect ~/.julia/packages/CTDirect/rvI7x/src/solve.jl:75
[34] solve(::Model{…})
@ CTDirect ~/.julia/packages/CTDirect/rvI7x/src/solve.jl:38
[35] solve(::Model{…}; kwargs::@Kwargs{})
@ OptimalControl ~/.julia/packages/OptimalControl/SiL76/src/solve.jl:96
[36] solve(::Model{…})
@ OptimalControl ~/.julia/packages/OptimalControl/SiL76/src/solve.jl:87
[37] top-level scope
@ REPL[4]:1
Some type information was truncated. Use `show(err)` to see complete types.
julia>
Hum, it is the forward-over-reverse mode for the Hessian that fails. If you don't use backend=:optimized, it should work, because the default is forward over forward.
I should investigate, but it is maybe better to spend time on the support of DI.jl in ADNLPModels.jl. Guillaume is maybe doing something smarter than us with the tape.
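For context (general ForwardDiff behaviour, not from the thread): "forward over forward" means nesting two ForwardDiff passes, which is what ForwardDiff.hessian does; both levels use distinct ForwardDiff tags, so no ReverseDiff tape is involved and no tag-ordering conflict can arise. A minimal sketch:

```julia
using ForwardDiff

# Forward-over-forward Hessian: the gradient pass is itself
# differentiated with a second, nested level of Dual numbers.
f(z) = 0.5 * z[1]^2 + z[1] * z[2]
H = ForwardDiff.hessian(f, [1.0, 2.0])
# H == [1.0 1.0; 1.0 0.0]
```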
@jbcaillau Do you still have the error with option adnlp_backend=:default added to the solve call?
> @jbcaillau Do you still have the error with option adnlp_backend=:default added to the solve call?
It works (but, as you said, it is slower, so it cannot be the default for us).
This should be explained in the doc.
@PierreMartinon @ocots @amontoison Setting adnlp_backend=:default indeed works. As pointed out by @ocots, it is 6 times slower on that example (o2 is equivalent but raises no bug, as not a single constant is used; check -1 + 0 * v instead of merely -1 😬).
> @jbcaillau Do you still have the error with option adnlp_backend=:default added to the solve call?

> It works (but, as you said, it is slower, so it cannot be the default for us).
julia> o1 = @def begin
v ∈ R, variable
t ∈ [0, 1], time
x ∈ R², state
u ∈ R, control
-1 ≤ v ≤ 1
x(0) - [-1, v] == [0, 0] # OK: the boundary constraint may involve the variable
x(1) == [0, 0]
ẋ(t) == [x₂(t), u(t)]
∫( 0.5u(t)^2 ) → min
end
julia> @btime solve(o1; adnlp_backend=:default) # no more exception, indeed
159.537 ms (502305 allocations: 237.32 MiB)
julia> o2 = @def begin
v ∈ R, variable
t ∈ [0, 1], time
x ∈ R², state
u ∈ R, control
-1 ≤ v ≤ 1
x(0) - [-1 + 0 * v, v] == [0, 0] # OK: the boundary constraint may involve the variable
x(1) == [0, 0]
ẋ(t) == [x₂(t), u(t)]
∫( 0.5u(t)^2 ) → min
end
julia> @btime solve(o2)
27.333 ms (113391 allocations: 15.55 MiB) # and same objective reached up to
@ocots is adding a Known issues section (pointing to issues) at the end of the "define a problem" tutorial: #480
> Should be explained on the doc.
@PierreMartinon @amontoison Have either of you tried to use ADNLP on a problem with a constraint involving a vector-valued function that has one constant component, to replicate the issue?
@jbcaillau
I was cleaning my emails and checked this issue again. Why do you use some Vector{Real}?
Its element type is not concrete in Julia.
ERROR: Cannot determine ordering of Dual tags ForwardDiff.Tag{ReverseDiff.var"#130#133"{typeof(-), ForwardDiff.Dual{ForwardDiff.Tag{ADNLPModels.var"#ψ#72"{CTDirect.var"#14#16"{CTDirect.DOCP{CTDirect.Trapeze, Model{CTModels.TimesModel{CTModels.FixedTimeModel{Int64}, CTModels.FixedTimeModel{Int64}}, CTModels.StateModel, CTModels.ControlModel, CTModels.VariableModel, var"##248#3", CTModels.LagrangeObjectiveModel{var"##252#4"}, CTModels.ConstraintsModel{Tuple{Vector{Real}, CTModels.var"#path_cons_nl!#59"{Int64, Vector{Int64}, Tuple{}}, Vector{Real}}, Tuple{Vector{Real}, CTModels.var"#boundary_cons_nl!#61"{Int64, Vector{Int64}, Tuple{var"##242#2", var"##237#1"}}, Vector{Real}}, Tuple{Vector{Real}, Vector{Int64}, Vector{Real}}, Tuple{Vector{Real}, Vector{Int64}, Vector{Real}}, Tuple{Vector{Real}, Vector{Int64}, Vector{Real}}, Dict{Symbol, Tuple{Symbol, Union{Function, OrdinalRange{<:Int64}}, AbstractVector{<:Real}, AbstractVector{<:Real}}}}}}}}, Float64}, Float64, 1}}, ForwardDiff.Dual{ForwardDiff.Tag{ADNLPModels.var"#ψ#72"{CTDirect.var"#14#16"{CTDirect.DOCP{CTDirect.Trapeze, Model{CTModels.TimesModel{CTModels.FixedTimeModel{Int64}, CTModels.FixedTimeModel{Int64}}, CTModels.StateModel, CTModels.ControlModel, CTModels.VariableModel, var"##248#3", CTModels.LagrangeObjectiveModel{var"##252#4"}, CTModels.ConstraintsModel{Tuple{Vector{Real}, CTModels.var"#path_cons_nl!#59"{Int64, Vector{Int64}, Tuple{}}, Vector{Real}}, Tuple{Vector{Real}, CTModels.var"#boundary_cons_nl!#61"{Int64, Vector{Int64}, Tuple{var"##242#2", var"##237#1"}}, Vector{Real}}, Tuple{Vector{Real}, Vector{Int64}, Vector{Real}}, Tuple{Vector{Real}, Vector{Int64}, Vector{Real}}, Tuple{Vector{Real}, Vector{Int64}, Vector{Real}}, Dict{Symbol, Tuple{Symbol, Union{Function, OrdinalRange{<:Int64}}, AbstractVector{<:Real}, AbstractVector{<:Real}}}}}}}}, Float64}, Float64, 1}} and ForwardDiff.Tag{ADNLPModels.var"#ψ#72"{CTDirect.var"#14#16"{CTDirect.DOCP{CTDirect.Trapeze, 
Model{CTModels.TimesModel{CTModels.FixedTimeModel{Int64}, CTModels.FixedTimeModel{Int64}}, CTModels.StateModel, CTModels.ControlModel, CTModels.VariableModel, var"##248#3", CTModels.LagrangeObjectiveModel{var"##252#4"}, CTModels.ConstraintsModel{Tuple{Vector{Real}, CTModels.var"#path_cons_nl!#59"{Int64, Vector{Int64}, Tuple{}}, Vector{Real}}, Tuple{Vector{Real}, CTModels.var"#boundary_cons_nl!#61"{Int64, Vector{Int64}, Tuple{var"##242#2", var"##237#1"}}, Vector{Real}}, Tuple{Vector{Real}, Vector{Int64}, Vector{Real}}, Tuple{Vector{Real}, Vector{Int64}, Vector{Real}}, Tuple{Vector{Real}, Vector{Int64}, Vector{Real}}, Dict{Symbol, Tuple{Symbol, Union{Function, OrdinalRange{<:Int64}}, AbstractVector{<:Real}, AbstractVector{<:Real}}}}}}}}, Float64}
I think everything is eventually set to Float64 and type stable. @ocots and @PierreMartinon, do you confirm?
Where do you see that we use Vector{Real}?
In the type of the structure allocated by ForwardDiff:
CTModels.ConstraintsModel{Tuple{Vector{Real}, ...}
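Background (standard Julia semantics, not from the thread): Real is an abstract type, so a container with that element type defeats type inference:

```julia
# Real is abstract: a Vector{Real} stores pointers to boxed values and
# every read requires dynamic dispatch, whereas a Vector{Float64}
# stores the floats inline in a contiguous buffer.
@assert !isconcretetype(Real)
@assert isconcretetype(Float64)

v_abs = Real[1, 2.0, π]       # heterogeneous elements, all boxed
v_con = Float64[1, 2.0, π]    # all converted to one concrete type
@assert eltype(v_abs) === Real
@assert eltype(v_con) === Float64
```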
Thanks, I have changed this.
> Thanks, I have changed this.
Good. Does it remove the issue with reverse diff?
I tried to test tonight, but I can't use the new version of CTModels without updating a bunch of other packages.
I tried JET.jl on your packages to isolate the issue and I see many sources of type instabilities.
I suggest adding a unit test, like we did with Guillaume in SparseMatrixColorings.jl, to the low-level packages like CTParser.jl.
Example: https://github.com/gdalle/SparseMatrixColorings.jl/blob/main/test/runtests.jl#L18-L20
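A sketch of such a test unit, modelled on the SparseMatrixColorings.jl runtests linked above (hypothetical adaptation for CTParser.jl; not an existing test file):

```julia
using CTParser
using JET
using Test

@testset "Code quality (JET.jl)" begin
    # test_package reports errors and undefined-variable issues across
    # the package; type instabilities need JET.test_opt on specific
    # entry points, e.g. JET.test_opt(f, (ArgTypes...,)).
    JET.test_package(CTParser; target_defined_modules = true)
end
```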
Actually, we need a new version of CTParser, then I need to update CTDirect, CTFlows and finally OptimalControl.
Thanks for the tests on type instabilities; I will try that. However, to build our model we have an intermediate structure called a PreModel, and I already know that this structure has type instabilities.
Either we should only test CTDirect or test only parts of CTModels.
> Actually, we need a new version of CTParser, then I need to update CTDirect, CTFlows and finally OptimalControl.
please review https://github.com/control-toolbox/CTParser.jl/pull/76#issuecomment-2920763425
> I tried JET.jl on your packages to isolate the issue and I see many sources of type instabilities. I suggest adding a unit test, like we did with Guillaume in SparseMatrixColorings.jl, to the low-level packages like CTParser.jl.
Just to clarify, JET.test_package doesn't report type instabilities, only JET.test_opt does
I don't know if it helps, but I have another example of this bug. I'm working with the following optimal control problem:
t0 = 0.
vf = 20.
x0 = [0., 0., 0.1, 0.1]
p = [1., 1., 1., 1., 2., 1.5]
# Functions
e(x) = p[1] + x[3] + x[4]
g(x) = p[2]
f₁₁(x) = p[3]
f₁₂(x) = - p[4]*x[3]
f₂₁(x) = p[5]
f₂₂(x) = - p[6]*x[4]
# Vector fields
F0(x) = [e(x), 0. , 0.5*(f₁₁(x) + f₁₂(x)), 0.5*(f₂₁(x) + f₂₂(x)) ]
F1(x) = [0., g(x), 0.5*(f₁₁(x) - f₁₂(x)), 0.5*(f₂₁(x) - f₂₂(x)) ]
ocp = @def begin
tf ∈ R, variable
t ∈ [t0, tf], time
x = (e, v, r1, r2) ∈ R⁴, state
u ∈ R, control
-1 ≤ u(t) ≤ 1
x(t0) == x0
v(tf) == vf
ẋ(t) == F0(x(t)) + u(t)*F1(x(t))
e(tf) → min
end
sol_direct = solve(ocp; display = true)
plot(sol_direct)
which works well. But when I write the vector fields F0 and F1 as follows:
Ff(x) = [e(x), g(x), f₁₁(x), f₂₁(x)]
Fb(x) = [e(x), -g(x), f₁₂(x), f₂₂(x)]
F0(x) = Ff(x) + Fb(x)
F1(x) = Ff(x) - Fb(x)
it leads to the same error, Cannot determine ordering of Dual tags ForwardDiff.Tag{...}. However, when I replace the commas by semicolons in the vector fields:
Ff(x) = [e(x); g(x); f₁₁(x); f₂₁(x)]
Fb(x) = [e(x); -g(x); f₁₂(x); f₂₂(x)]
F0(x) = Ff(x) + Fb(x)
F1(x) = Ff(x) - Fb(x)
it works. It's a bit weird from my point of view, and I hope this helps.
@remydutto Thanks for the report. Weird...
@ocots @PierreMartinon No error when the constraint is scalarised. (And this definition also works with :exa, as everything is scalar.)
julia> o = @def begin
v ∈ R, variable
t ∈ [0, 1], time
x ∈ R², state
u ∈ R, control
-1 ≤ v ≤ 1
x₁(0) == -1
x₂(0) - v == 0
x(1) == [0, 0]
∂(x₁)(t) == x₂(t)
∂(x₂)(t) == u(t)
∫( 0.5u(t)^2 ) → min
end
Current state of this issue as of October 2025
Apart from the weird example mentioned by @remydutto above, a systematic tweak to avoid issues with constants when using the ADNLP modeller with forward over reverse is to add a trivial term to the expression, e.g. replace == 1 by == 1 + 0 * x1(t). This seems to enforce correct behaviour of the AD, while using zero or one does not (to be checked).
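Concretely, for the problem at the top of this thread, the tweak amounts to the o2 formulation above, which solves with the default backend:

```julia
using OptimalControl
using NLPModelsIpopt

# Workaround: tie the constant -1 to the variable v with a trivial
# 0 * v term, so that all components of the boundary constraint share
# the same (Dual) number type under AD.
o = @def begin
    v ∈ R, variable
    t ∈ [0, 1], time
    x ∈ R², state
    u ∈ R, control
    -1 ≤ v ≤ 1
    x(0) - [-1 + 0 * v, v] == [0, 0]   # instead of [-1, v]
    x(1) == [0, 0]
    ẋ(t) == [x₂(t), u(t)]
    ∫( 0.5u(t)^2 ) → min
end

s = solve(o)   # no Dual tag error, even with the optimized backend
```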