Tangi Migot
This is the function that needs to be modified: https://github.com/JuliaSmoothOptimizers/NLPModels.jl/blob/3b5ec8eedf78eaa609447456f69e01f55f0209ca/src/nlp/show.jl#L71
See https://github.com/JuliaSmoothOptimizers/NLPModels.jl/pull/399#issue-1224201189
Following #379, the implementation of `jtprod` [L. 475](https://github.com/JuliaSmoothOptimizers/NLPModels.jl/blob/289b81ceea0961cb60a6e536d32311114b09405b/src/nlp/api.jl#L475) allocates by default when the problem has both linear and nonlinear constraints. The optimal way is to re-implement this method when creating...
This is more a question than an issue, but I am wondering if there are plans to also handle sparse Jacobians?
What is the best way to broadcast `LinearOperators`? For instance,

```julia
Jx = jac_op(nlp, x) # where nlp is an AbstractNLPModel, and x a vector
```

and then `Jx`...
I used to be able to compute the derivative of the residual with respect to parameters with a small tweak of the Jacobian function within Gridap [the old solution under...
It is often convenient for such large-scale problems to evaluate matrix-vector products instead of evaluating the whole matrix every time. Have you considered using automatic differentiation for such operations?
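As a hedged sketch of what such matrix-free products look like (the `ADNLPModel` constructor and `hprod` call follow the NLPModels.jl / ADNLPModels.jl API; the concrete objective and dimensions are hypothetical):

```julia
using ADNLPModels, NLPModels  # ADNLPModels provides AD-backed models

# Hypothetical small model; in practice the problem would be large-scale.
nlp = ADNLPModel(x -> sum(x .^ 2), ones(100))
x = nlp.meta.x0
v = ones(nlp.meta.nvar)

# Hessian-vector product computed via AD, without ever forming the Hessian.
Hv = hprod(nlp, x, v)
```

Here the Hessian of `sum(x .^ 2)` is `2I`, so `Hv` should equal `2v` without a dense or sparse matrix being assembled.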
What would be the best reference to cite this package? :)
This is an example of how we could use ADNLSModel for the least-squares objective. Currently, the tests break because, in the JuMP models currently implemented, we generally don't have...