NLPModels.jl
replace `@.` in `hess_op`
```julia
using NLPModels, NLPModelsTest # https://github.com/JuliaSmoothOptimizers/NLPModelsTest.jl/pull/60
nlp = BROWNDEN()
print_nlp_allocations(nlp, test_allocs_nlpmodels(nlp))
```

returns

```
Problem name: BROWNDEN_manual
obj: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0.0
hess_op!: ████████████████████ 1344.0
hess_coord!: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0.0
grad!: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0.0
hess_structure!: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0.0
hprod!: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0.0
hess_op_prod!: ██⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 96.0
```
and after this PR, it returns

```
Problem name: BROWNDEN_manual
obj: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0.0
hess_op!: ████████████████████ 1248.0
hess_coord!: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0.0
grad!: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0.0
hess_structure!: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0.0
hprod!: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0.0
hess_op_prod!: ██⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 96.0
```
We can do the same for `jac_op`, `jac_lin_op`, and `jac_nln_op`.
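For illustration, here is a sketch of what the analogous Jacobian operator could look like if it follows the same closure-based pattern as the `hess_op!` variant proposed later in this thread (hypothetical name and code, not this PR's actual diff):

```julia
# Hypothetical sketch only: mirrors the closure-based hess_op! pattern
# discussed below; the name jac_op_sketch is made up for illustration.
using NLPModels, LinearOperators

function jac_op_sketch(nlp::AbstractNLPModel, x::AbstractVector)
  prod = (res, v) -> jprod!(nlp, x, v, res)    # res ← J(x) * v, in place
  tprod = (res, u) -> jtprod!(nlp, x, u, res)  # res ← J(x)' * u, in place
  return LinearOperator{eltype(x)}(nlp.meta.ncon, nlp.meta.nvar, false, false, prod, tprod, tprod)
end
```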
@geoffroyleconte @amontoison @dpo @abelsiqueira I am slightly surprised that `hess_op!` allocates, and that using the resulting linear operator for products also allocates. Any idea where this is coming from? And can we improve it?
Codecov Report

Base: 99.49% // Head: 99.49% // No change to project coverage :thumbsup:

Coverage data is based on head (d1f7308) compared to base (f9388ab). Patch coverage: 100.00% of modified lines in pull request are covered.

```
@@           Coverage Diff           @@
##             main     #416   +/-   ##
=======================================
  Coverage   99.49%   99.49%
=======================================
  Files          13       13
  Lines         786      786
=======================================
  Hits          782      782
  Misses          4        4
```

| Impacted Files | Coverage Δ |
|---|---|
| src/nlp/api.jl | 99.75% <100.00%> (ø) |
@tmigot Can you try this function to see if it allocates, please?

```julia
function hess_op!(
  nlp::AbstractNLPModel,
  x::AbstractVector,
  Hv::AbstractVector,
  obj_weight::Real = one(eltype(x)),
)
  @lencheck nlp.meta.nvar x Hv
  prod = (res, v) -> hprod!(nlp, x, v, res; obj_weight = obj_weight)
  return LinearOperator{eltype(x)}(nlp.meta.nvar, nlp.meta.nvar, true, true, prod, prod, prod)
end
```
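For reference, a quick way to measure this (a sketch; it assumes the `hess_op!` above is in scope and uses `BROWNDEN` from NLPModelsTest):

```julia
using LinearAlgebra, NLPModels, NLPModelsTest, LinearOperators

nlp = BROWNDEN()
x = nlp.meta.x0
Hv = similar(x)
v = ones(nlp.meta.nvar)

H = hess_op!(nlp, x, Hv)          # warm-up so compilation is not counted
mul!(Hv, H, v)

@allocated hess_op!(nlp, x, Hv)   # bytes allocated building the operator
@allocated mul!(Hv, H, v)         # bytes allocated per product (goal: 0)
```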
You allocate an array here: https://github.com/JuliaSmoothOptimizers/NLPModels.jl/pull/416/files#diff-3d235ab1458b43a45704d5735013ca8e8a94befce081da5f56722f0b95f71576R1141
> @tmigot Can you try this function to see if it allocates, please?

With the function above, it returns

```
Problem name: BROWNDEN_manual
obj: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0.0
hess_op!: ████████████████████ 1136.0
hess_coord!: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0.0
grad!: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0.0
hess_structure!: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0.0
hprod!: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0.0
hess_op_prod!: ██⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 96.0
```
> You allocate an array here: https://github.com/JuliaSmoothOptimizers/NLPModels.jl/pull/416/files#diff-3d235ab1458b43a45704d5735013ca8e8a94befce081da5f56722f0b95f71576R1141

I am not sure which vector we allocate there. Maybe the creation of a linear operator structure itself just allocates, and that can't be avoided.
Hm, not sure why my link didn't work. I mean that `hprod!()` without the multipliers calls `hprod!()` with multipliers by allocating a vector of zero multipliers: https://github.com/JuliaSmoothOptimizers/NLPModels.jl/blob/main/src/nlp/api.jl#L1141. Could that be it?
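For context, the fallback being pointed at has roughly this shape (a paraphrase reconstructed from the sentence above, not the exact source):

```julia
# Paraphrased sketch: the call without multipliers builds a zero multiplier
# vector before dispatching to the constrained method; `zeros` allocates on
# every call.
using NLPModels

function hprod!(
  nlp::AbstractNLPModel,
  x::AbstractVector,
  v::AbstractVector,
  Hv::AbstractVector;
  obj_weight::Real = one(eltype(x)),
)
  y = zeros(eltype(x), nlp.meta.ncon)  # fresh allocation at each call
  return hprod!(nlp, x, y, v, Hv; obj_weight = obj_weight)
end
```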
> Maybe the creation of a linear operator structure itself just allocates, and that can't be avoided.

That's probably true. And in fact, I would expect `hess_op!()` to take an operator as argument and modify its `prod`, `tprod`, and `ctprod`.
> I mean that `hprod!()` without the multipliers calls `hprod!()` with multipliers by allocating a vector of zero multipliers. Could that be it?

You are right, this allocates in general. But not here, because we now redefine `hprod!` in the test problems of NLPModelsTest: https://github.com/JuliaSmoothOptimizers/NLPModelsTest.jl/blob/32ff072119733adc0c8637c2cc8380d108108939/src/nlp/problems/brownden.jl#L107
> Maybe the creation of a linear operator structure itself just allocates, and that can't be avoided.

Okay, but what is still strange is that computing a product `H * v` also allocates (this is the `hess_op_prod!` row in the tables above).
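One distinction worth keeping in mind here (an aside, not from the thread): with the `*` operator the output vector is always allocated, so a fair per-product measurement must use the in-place form:

```julia
using LinearAlgebra  # for mul!; H, v, Hv as in the snippets above

w = H * v        # `*` allocates the result vector by construction
mul!(Hv, H, v)   # in-place product; the call that should reach 0 bytes
```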
Ok. Did you confirm that https://github.com/JuliaSmoothOptimizers/NLPModelsTest.jl/blob/32ff072119733adc0c8637c2cc8380d108108939/src/nlp/problems/brownden.jl#L107 does not allocate?
> Ok. Did you confirm that https://github.com/JuliaSmoothOptimizers/NLPModelsTest.jl/blob/32ff072119733adc0c8637c2cc8380d108108939/src/nlp/problems/brownden.jl#L107 does not allocate?

Yes, I updated https://github.com/JuliaSmoothOptimizers/NLPModelsTest.jl/pull/60#issuecomment-1231887261
All the allocations are in LinearOperators.jl. The main issue is this line: https://github.com/JuliaSmoothOptimizers/LinearOperators.jl/blob/main/src/abstract.jl#L75
We also always allocate two empty vectors: https://github.com/JuliaSmoothOptimizers/LinearOperators.jl/blob/main/src/abstract.jl#L74

```julia
using NLPModelsTest
nlp = eval(Meta.parse(NLPModelsTest.nlp_problems[1]))()  # first test problem
test_allocs_nlpmodels(nlp)

# Allocation profiling used to find where the bytes come from:
using Profile, PProf
Profile.Allocs.clear()
Profile.Allocs.@profile sample_rate=1 test_allocs_nlpmodels(nlp)
PProf.Allocs.pprof(from_c = false)
```
> All the allocations are in LinearOperators.jl. [...]

I would say that those allocations are somewhat expected. This is the reason why I think `hess_op!()` needs to modify the operator in place, instead of creating a new one.
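A minimal sketch of that idea (hypothetical, not code from this PR): build the operator once around a persistent buffer for the evaluation point, and update the buffer instead of constructing new closures and a new operator.

```julia
# Hypothetical sketch: one long-lived operator whose evaluation point can be
# moved without building new closures or a new LinearOperator.
using NLPModels, LinearOperators

function persistent_hess_op(nlp::AbstractNLPModel, x0::AbstractVector)
  xk = copy(x0)  # buffer captured once by the closure below
  prod = (res, v) -> hprod!(nlp, xk, v, res)
  op = LinearOperator{eltype(x0)}(nlp.meta.nvar, nlp.meta.nvar, true, true, prod, prod, prod)
  update! = x -> (xk .= x; op)  # move to a new point without reallocating
  return op, update!
end

# Usage sketch:
# op, update! = persistent_hess_op(nlp, nlp.meta.x0)
# update!(x_new); mul!(Hv, op, v)
```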