Mathieu Besançon


I'll try to find examples in Manopt using it when this becomes relevant.

Once #121 is merged I'll look into it, but the iterate in BCG is already in-memory (it exists only within the active set).
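For context, a minimal sketch of what "in-memory within the active set" means here; the type and helper names are illustrative, not necessarily FrankWolfe.jl's actual API:

```julia
# Hedged sketch: in BCG the iterate is never stored as a standalone vector;
# it exists only as a convex combination over the active set.
struct ActiveSet{AT,T<:Real}
    weights::Vector{T}  # convex weights, assumed to sum to one
    atoms::Vector{AT}   # extreme points collected so far
end

# Materialize the iterate on demand (illustrative helper name).
compute_iterate(as::ActiveSet) =
    sum(w * a for (w, a) in zip(as.weights, as.atoms))

as = ActiveSet([0.25, 0.75], [[1.0, 0.0], [0.0, 1.0]])
compute_iterate(as)  # [0.25, 0.75]
```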

OK yes. Even with the alternative version, each iteration can control the batch size by picking the size of the `xs` list that is passed to `{f/g}_batched`.
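To make that concrete, a hedged toy example of the batched interface: `f_batched`/`g_batched` are the names from the comment, while the loss and data are made up. The call site controls the batch size simply by slicing `xs`:

```julia
using LinearAlgebra, Random

# Toy batched objective and gradient: they take the list of data points
# directly, so the caller decides how many elements of `xs` to pass.
f_batched(θ, xs) = sum(x -> (dot(θ, x) - 1)^2, xs) / length(xs)
g_batched(θ, xs) = sum(x -> 2 * (dot(θ, x) - 1) * x, xs) / length(xs)

xs = [randn(5) for _ in 1:100]
θ = randn(5)
batch = rand(1:length(xs), 10)           # pick a batch of size 10
grad_estimate = g_batched(θ, xs[batch])  # batch size chosen at the call site
```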

So for now at the FW function level we have this:
```julia
compute_gradient(f, x, rng=rng, batch_size=batch_size)
```
At the `compute_gradient` level for `f::StochasticObjective`:
```julia
rand_indices = if full_evaluation
    eachindex(f.xs)
else
...
```
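The excerpt is truncated; a hedged completion of that branch, where field and keyword names beyond the excerpt are assumptions:

```julia
using Random

# Assumed shape: per-sample loss, per-sample gradient, and the data points.
struct StochasticObjective{F,G,XT}
    f::F      # per-sample loss: f(θ, x)
    grad::G   # per-sample gradient: grad(θ, x)
    xs::XT    # data points
end

function compute_gradient(f::StochasticObjective, θ;
                          rng=Random.default_rng(),
                          batch_size=length(f.xs) ÷ 10,
                          full_evaluation=false)
    rand_indices = if full_evaluation
        eachindex(f.xs)              # use every sample
    else
        rand(rng, eachindex(f.xs), batch_size)  # sample a mini-batch
    end
    # average the per-sample gradients over the selected indices
    return sum(i -> f.grad(θ, f.xs[i]), rand_indices) ./ length(rand_indices)
end
```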

Looking at it, the most promising improvement I see: we are operating on
```
d = sum lambda_k (v_k - x)
```
We could instead operate on:
```
d = sum lambda_k...
```
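The excerpt is cut off there, but one natural reading (my assumption, not the original comment's continuation) uses the fact that the convex weights `lambda_k` sum to one, so `x` can be factored out of the sum and subtracted once:

```julia
# Hedged sketch: since sum(λ) == 1,
#   d = Σ_k λ_k (v_k - x) = (Σ_k λ_k v_k) - x,
# so x is subtracted once instead of once per atom.
function direction(λ, vs, x)
    d = sum(λ[k] * vs[k] for k in eachindex(λ, vs))
    return d - x
end
```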

Not sure I follow: if we set `z[j]` to binary, how do you check that the user has done it themselves? Does the optimizer have to keep another copy? Or...
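For reference, a toy JuMP-level illustration of the question (not the code under discussion): the binary attribute can be queried from the model itself, which is exactly what keeping "another copy" would duplicate:

```julia
using JuMP

model = Model()
@variable(model, z[1:3])
set_binary(z[2])  # user marks one variable binary

# the model can be queried directly, so no separate copy is strictly needed
is_binary(z[2])  # true
is_binary(z[1])  # false
```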

I can't seem to reproduce the issue anymore, either with JuMP or with MOI directly. It is very likely still there, but this makes it tricky to test.

Isn't there the raw wrapper? https://github.com/scipopt/SCIP.jl/blob/master/src/wrapper/pub_pricer.jl
Note: this means you would have to use it through a fairly raw interface with the managed SCIP instead of the MOI wrapper.
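A loosely hedged sketch of what driving that raw wrapper could look like; the pricer name is hypothetical, and the accessor for the underlying `SCIP*` pointer is an assumption about SCIP.jl internals (the C function names mirror SCIP's public API):

```julia
using SCIP

# Assumption: reach through the managed-SCIP layer for the raw SCIP* pointer.
o = SCIP.Optimizer()
scip_ptr = o.mscip.scip[]  # field names are an assumption

# e.g. look up and activate a pricer by (hypothetical) name via the wrapper
pricer = SCIP.SCIPfindPricer(scip_ptr, "my_pricer")
SCIP.SCIPactivatePricer(scip_ptr, pricer)
```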

> Not sure if we can actually show the full stacktrace (including the SCIP calls).

I believe this requires compiling SCIP in debug mode? I'm not sure we will get...

OK I see. So the goal would be to print the code before throwing an error instead of the assert in the macro?
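A minimal sketch of that idea (not SCIP.jl's actual macro): capture the expression at macro-expansion time and include it in the thrown error, rather than asserting:

```julia
using SCIP

# Hedged sketch: evaluate the call and, on a non-OK return code, throw an
# error that shows both the offending expression and the code.
macro scip_call(ex)
    str = string(ex)  # the source expression, captured as text
    return quote
        retcode = $(esc(ex))
        if retcode != SCIP.SCIP_OKAY
            error("SCIP call `", $str, "` failed with return code ", retcode)
        end
        retcode
    end
end
```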