Oscar Dowson
Closing because I think this is a dead end. The cost of Lagrangian duality just doesn't improve things. I've seen a few papers recently claiming to "do SDDiP" but they...
@adow031 worked around this by adding a new sampling scheme and a new forward pass: https://github.com/EPOC-NZ/JADE.jl/issues/19.
Closing in favor of #496
From #516:

> Doubts from the vector auto-regressive model,

```julia
model = SDDP.LinearPolicyGraph(
    stages = 3,
    sense = :Min,
    lower_bound = 0.0,
    optimizer = GLPK.Optimizer,
) do sp, t
    @variable(sp, 0...
```
This is really asking for a function that computes the convex relaxation of an arbitrary JuMP model, and that's a different ballpark. Instead of having separate models, we should really...
A good way to get started is a function that implements Andrew's algorithm. Pseudo-code would look like:

```julia
nonconvex_model = SDDP.PolicyGraph(...)
convex_model = SDDP.PolicyGraph(...)
function train_nonconvex(nonconvex_model, convex_model)
    while _...
```
We actually probably have enough code in the asynchronous stuff to make this work:

https://github.com/odow/SDDP.jl/blob/a382ea96a6531c774eafa6fa0e73dbede64e83a6/src/plugins/parallel_schemes.jl#L135
https://github.com/odow/SDDP.jl/blob/a382ea96a6531c774eafa6fa0e73dbede64e83a6/src/plugins/parallel_schemes.jl#L167-L187

What about something like:

```julia
nonconvex_model = SDDP.PolicyGraph(...)
convex_model = SDDP.PolicyGraph(...)
function train_nonconvex(nonconvex_model, convex_model)...
```
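For the record, one hedged reading of the truncated pseudo-code is sketched below. This is not the actual SDDP.jl implementation: `forward_pass`, `backward_pass`, and `add_cuts!` are hypothetical placeholder names for illustration only. The idea is to sample forward trajectories on the true nonconvex model, but compute cuts on the convex relaxation, where duality yields valid subgradients; since the relaxation's cost-to-go underestimates the nonconvex one, those cuts remain valid for both models.

```julia
# Sketch only. `forward_pass`, `backward_pass`, and `add_cuts!` are
# hypothetical placeholders, not real SDDP.jl internals.
function train_nonconvex(
    nonconvex_model::SDDP.PolicyGraph,
    convex_model::SDDP.PolicyGraph;
    iteration_limit::Int = 100,
)
    for _ in 1:iteration_limit
        # Sample a forward trajectory through the true (nonconvex) model.
        trajectory = forward_pass(nonconvex_model)
        # Run the backward pass on the convex relaxation, where dual
        # solutions give valid cuts at the states visited going forward.
        cuts = backward_pass(convex_model, trajectory)
        # Share the new cuts with both models so later forward passes
        # use the improved value-function approximation.
        add_cuts!(convex_model, cuts)
        add_cuts!(nonconvex_model, cuts)
    end
    return
end
```

Whether the cut-sharing happens per iteration or in batches (as in the asynchronous parallel schemes linked above) is a design choice the thread leaves open.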
I guess I haven't really thought through the details; it's probably a lot more work than I'm imagining.
Here you go: https://github.com/odow/SDDP.jl/pull/611
It only took us four years, but we got there...