Broadcast LinearOperators
What is the best way to broadcast LinearOperators? For instance,
Jx = jac_op(nlp, x) # where nlp is an AbstractNLPModel, and x a vector
and then
Jx .= jac_op(nlp, y)
You can't do that (at least not in the current state of things). Maybe we could cook up jac_op!(J, nlp, x).
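To illustrate the motivation, here is a minimal sketch (the `solve_sketch` name is made up, not from the thread) of what callers do today: a fresh operator is built every time the point changes, which is exactly the allocation an in-place `jac_op!(J, nlp, x)` or a broadcastable assignment would avoid.

```julia
using NLPModels

# Hypothetical solver loop: the operator is rebuilt at every new point.
function solve_sketch(nlp::AbstractNLPModel, x::AbstractVector; iters = 10)
  for _ in 1:iters
    Jx = jac_op(nlp, x)   # a new LinearOperator is allocated here each iteration
    # ... use Jx (e.g. Jx * v, Jx' * w), then update x in place ...
  end
  return x
end
```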
I think that if you define Jx = jac_op(nlp, x), and change x afterward with .=, then Jx is changed automatically. At least for the default jac_op.
Is it? That wouldn't really be a desirable side effect.
Perhaps we could store the x value inside the operator? Maybe creating a HessianLinearOperator?
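A hedged sketch of that idea, assuming LinearOperators v2's `prod!(res, v, α, β)` convention and NLPModels' `jprod!`: the closure captures a buffer `xk` holding the evaluation point, so "moving" the operator to a new point is just an in-place copy instead of building a new object. The name `jac_op_at_point` is made up for illustration.

```julia
using LinearOperators, NLPModels

# Hypothetical helper: a Jacobian operator that owns its evaluation point.
function jac_op_at_point(nlp::AbstractNLPModel, x0::AbstractVector{T}) where {T}
  xk = copy(x0)                      # evaluation point owned by the operator
  Jv = similar(x0, nlp.meta.ncon)    # work buffer for J(xk) * v
  prod! = (res, v, α, β) -> begin
    jprod!(nlp, xk, v, Jv)           # Jv = J(xk) * v
    if β == zero(T)
      res .= α .* Jv
    else
      res .= α .* Jv .+ β .* res
    end
    res
  end
  op = LinearOperator(T, nlp.meta.ncon, nlp.meta.nvar, false, false, prod!)
  return op, xk                      # caller refreshes the point with xk .= y
end
```

Refreshing the point is then `xk .= y`, and the same `op` keeps producing products at the new point.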
Bumping this conversation following the discussion in https://github.com/JuliaSmoothOptimizers/NLPModels.jl/pull/416.
What @abelsiqueira described
I think that if you define Jx = jac_op(nlp, x), and change x afterward with .=, then Jx is changed automatically. At least for the default jac_op.
is still true with LinearOperators v2.
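A minimal stand-alone illustration of that capture behavior (plain closures, no NLPModels needed): the closure keeps a reference to the array it captured, so mutating that array with `.=` changes what subsequent calls compute.

```julia
x = [1.0, 2.0]
apply = v -> x .* v        # captures the array x by reference, like the default jac_op closures

apply([1.0, 1.0])          # returns [1.0, 2.0]
x .= [3.0, 4.0]            # in-place update of the captured point
apply([1.0, 1.0])          # now returns [3.0, 4.0]
```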
Besides, I am not sure how we would replace op.prod!, because we cannot do
op.prod! = new_prod!
because of this error:
MethodError: Cannot `convert` an object of type NLPModels.var"#40#41"{Float64, SimpleNLPModel{Float64, Vector{Float64}}, Vector{Float64}, Vector{Float64}, typeof(*), typeof(+)} to an object of type NLPModels.var"#49#50"{Float64, SimpleNLPModel{Float64, Vector{Float64}}, Vector{Float64}, Vector{Float64}, typeof(*), typeof(+)}
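A small stand-alone reproduction of why that assignment cannot work (the struct and names below are made up for illustration): each anonymous function has its own concrete type, and a field parameterized on the first closure's type cannot hold a different closure.

```julia
# Mimics a LinearOperator-like struct whose prod! field is concretely typed.
mutable struct Holder{F}
  prod!::F
end

g1 = (res, v) -> (res .= 2 .* v)
h  = Holder(g1)                  # Holder{typeof(g1)}

g2 = (res, v) -> (res .= 3 .* v)
h.prod! = g2                     # MethodError: cannot `convert` typeof(g2) to typeof(g1)
```

So swapping the product in place would require typing the field abstractly (e.g. as `Function`), at the cost of dynamic dispatch, or keeping the product fixed and mutating the evaluation point instead, as sketched above.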