
Ideas to improve multistage stochastic integer programs

Open odow opened this issue 4 years ago • 9 comments

Motivation

The implementation of SDDiP in SDDP.jl is bad. If you use it, you will probably run into convergence issues! No need to open an issue, I'm aware of the problem. (If you have challenging multistage stochastic integer problems, please send them to me!)

At this point it isn't apparent whether SDDiP is a bad algorithm or just poorly implemented in SDDP.jl.

My guess is that it is a bad algorithm that almost results in complete enumeration for most problems, but that in some cases we can do okay if we implement the following algorithmic improvements.

Clever switching between duality handlers

  • We should write a handler that:
    • starts with ContinuousConicDuality()
    • if the subproblems have integer variables, periodically trials a StrengthenedConicDuality() solve
    • switches to StrengthenedConicDuality() if the trial improved the bound by more than ContinuousConicDuality() could be expected to achieve in the same time
    • does the same for StrengthenedConicDuality() -> LagrangianDuality()

This should mean that it can be the default for all cases: LPs see no performance loss, MIPs can only get better, and we don't waste time with the strengthened solve if the problem is naturally integer, or with the Lagrangian solve if it takes too long. A sketch of the switching logic is below.
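
A minimal sketch of what this switching handler could look like. The struct and field names here are hypothetical, not part of SDDP.jl:

# Hypothetical sketch: escalate to a more expensive duality handler only
# when a periodic trial suggests it improves the bound faster per second.
mutable struct SwitchingDuality
    handlers::Vector{Any}  # e.g. [ContinuousConicDuality(), StrengthenedConicDuality(), LagrangianDuality()]
    current::Int           # index of the handler currently in use
end

SwitchingDuality(handlers) = SwitchingDuality(handlers, 1)

# `current_rate` and `trial_rate` are bound improvements per second, measured
# for the current handler and for a trial run of the next handler.
function maybe_escalate!(h::SwitchingDuality, current_rate::Float64, trial_rate::Float64)
    if h.current < length(h.handlers) && trial_rate > current_rate
        h.current += 1
    end
    return h.handlers[h.current]
end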

Bounded states

We need bounds on x.in to ensure the dual has a bounded solution, but it isn't obvious what bounds to use or how best to impose them.

Advanced Lagrangian solves

The current implementation of the cutting plane solve is very naive. See Jim's slides: https://www.ima.umn.edu/materials/2015-2016/ND8.1-12.16/25411/Luedtke-mip-decomp.pdf

  • This paper by Jim and his student looks good for improving the SDDiP cuts: https://arxiv.org/pdf/2106.04023.pdf
    • The key idea is to restrict the Lagrangian duals to some set P, formed by the convex hull of the LP duals found to date. This reduces the solution time of the Lagrangian cutting plane solve, at the expense of poorer duals early in the solve.
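
A sketch of that restriction, in my own notation (not the paper's exact formulation): if \pi^1, \dots, \pi^k are the LP duals collected so far, solve

\max_{\pi \in P} L(\pi), \qquad P = \operatorname{conv}\{\pi^1, \dots, \pi^k\},

where L(\pi) is the Lagrangian dual function of the stage subproblem, instead of maximising over all \pi.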

Other

  • Argonne folks are working on dual-decomposition based stuff: https://github.com/odow/SDDP.jl/issues/441, https://github.com/hideakiv/DRMSMIP.jl, https://github.com/kibaekkim/DualDecomposition.jl/pull/36
  • A user sent me a model (in my emails as "Log file (SDDP issue)") that performs very badly in the default settings of SDDP.jl. It's also not a good candidate for SDDiP because the integrality is in the control variables, not the state variables.

Done

  • [x] remove the Bellman function stuff and just hard-code one value function with a single cutting loop.
    • Here was a previous attempt: https://github.com/odow/SDDP.jl/pull/265. Before writing new code, go read it because it had a number of good ideas.
  • [x] Decouple state binarization from SDDiP cutting loop. They are independent ideas.
    • SDDiP still works with a mix of integer and continuous state variables, it just may result in a suboptimal policy. Pure binary is a special case with guaranteed convergence.
  • [x] Add strengthened benders cuts: https://github.com/odow/SDDP.jl/issues/257
  • [x] Unify tolerances: https://github.com/odow/SDDP.jl/issues/264
  • [x] Write documentation for the features introduced by #237.
  • [x] #455 fixes this by adding a solve for the smallest L1-norm dual solution, i.e., a Magnanti-Wong-style stabilisation towards some interior point (e.g., one obtained from solving the LP). This is particularly important for the 0-1 case, where all the points we evaluate are at the edge of the feasible region, so very steep duals are optimal. (A sketch of this dual-selection problem is given after this list.)
  • [x] Give up on convergence, and present SDDP as a good heuristic for finding optimal control policies.
    • We can find feasible (near optimal) policies for risk-averse infinite horizon, stochastic optimal control problems with continuous and discrete variables! This should be something the community advocates, instead of saying we can't prove optimality and things are too hard. Current state-of-the-art in POMDP-type fields is just to come up with a heuristic and say "wow, it works on a test problem."
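
A sketch of the dual-selection problem behind the L1-norm item above (my notation; #455 may implement it differently): letting d^* be the optimal dual objective value, choose

\min_{\lambda} \; \|\lambda\|_1 \quad \text{s.t.} \quad L(\lambda) \ge d^*,

i.e., among the optimal duals, pick the least steep one rather than whichever vertex the solver happens to return.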

Write a theory tutorial

I should write a tutorial for the documentation.

I find the way SDDiP is presented as a concept in the literature confusing.

We have these subproblems (a sketch is given below), and we're trying to find a feasible dual solution and the corresponding dual objective value, which give the slope and intercept of the cut respectively.
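
Here is a sketch of the subproblem (the original image is not reproduced; the notation below is mine and may differ from it). At node i, with incoming state x and noise \omega:

\begin{aligned}
V_i(x, \omega) = \min_{\bar{x}, x', u} \quad & C_i(\bar{x}, u, \omega) + \theta \\
\text{s.t.} \quad & \bar{x} = x \qquad [\lambda] \quad \text{(the ``fishing'' constraint)} \\
& (\bar{x}, x', u) \in \mathcal{X}_i(\omega) \quad \text{(possibly with integrality)} \\
& \theta \ge \text{existing cuts.}
\end{aligned}

Given a trial point \hat{x}, a feasible dual \lambda for the fishing constraint, and the corresponding dual objective value L, the cut is V_i(x) \ge L + \lambda^\top (x - \hat{x}): \lambda is the slope and L is the intercept.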

  • ContinuousConicDuality relaxes the integrality and solves the conic dual: https://odow.github.io/SDDP.jl/stable/apireference/#SDDP.ContinuousConicDuality
  • StrengthenedConicDuality uses the same dual solution vector, but strengthens it by computing a tighter dual objective value given that dual solution: https://odow.github.io/SDDP.jl/stable/apireference/#SDDP.StrengthenedConicDuality
  • LagrangianDuality relaxes the fishing constraint (x̄ = x in the sketch above) to compute the Lagrangian dual solution and corresponding objective value: https://odow.github.io/SDDP.jl/stable/apireference/#SDDP.LagrangianDuality
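
In the notation of the sketch above (my summary, not the docstrings), relax the fishing constraint with multiplier \lambda and define the Lagrangian dual function at the trial point \hat{x}:

L(\lambda) = \min \left\{ C_i(\bar{x}, u, \omega) + \theta + \lambda^\top (\hat{x} - \bar{x}) \;:\; (\bar{x}, x', u) \in \mathcal{X}_i(\omega), \; \theta \ge \text{cuts} \right\}.

Then, roughly:

  • ContinuousConicDuality: solve the continuous relaxation; take \lambda from the dual of the fishing constraint and use the relaxed optimal value as the intercept.
  • StrengthenedConicDuality: keep that same \lambda, but use L(\lambda) evaluated with integrality enforced as the intercept (one extra MIP solve).
  • LagrangianDuality: solve \max_\lambda L(\lambda) with integrality enforced, and use the maximiser and its value.

In each case the cut V_i(x) \ge \text{intercept} + \lambda^\top (x - \hat{x}) is valid by weak duality, and at the same trial point the intercepts are ordered: continuous \le strengthened \le Lagrangian.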

You should probably be able to find a feasible solution with the default ContinuousConicDuality option as well. I changed how we handle integer variables in the forward pass during training.

odow avatar Aug 26 '19 13:08 odow

@haoxiangyang89 #265 had some of the ideas we discussed the other day.

odow avatar Feb 01 '21 02:02 odow

@guyichen09 (cc @mortondp) a use-case for Bayesian Optimization/multi-armed bandit stuff:

On each iteration of SDDP.jl, we have three choices for our duality handler:

  • ContinuousConicDuality (a.k.a. continuous benders)
  • StrengthenedConicDuality (a.k.a. strengthened benders)
  • LagrangianDuality (a.k.a. SDDiP)

Each duality handler will lead to cuts that improve our lower bound by some amount, but take a different amount of time. You could normalize each iteration to a reward function of (change in objective / time taken). This is stochastic, and it changes over time.

I want to write a BanditDuality that intelligently picks which handler to run over the iterations. Seems like this is pretty do-able?

odow avatar Sep 07 '21 19:09 odow

Hi Oscar,

I have an idea for this BanditDuality. I think a multi-armed bandit algorithm would be a good method for this: we could use a modified version of Thompson Sampling, with the reward function modified to take into account both the change in objective and the time taken. What do you think?

Best, Guyi

guyichen09 avatar Sep 07 '21 21:09 guyichen09

Yeah my suggestion for the reward would be

function reward(log::Vector{Log})
    # Improvement in the bound made by the most recent iteration.
    d_bound = abs(log[end].bound - log[end-1].bound)
    # Wall-clock time taken by that iteration.
    dt = log[end].time - log[end-1].time
    # Reward = bound improvement per unit time.
    return d_bound / dt
end

And the log is https://github.com/odow/SDDP.jl/blob/ad7a4183895a81afda8ed303389d7d3c92bd58b5/src/algorithm.jl#L938

There's a little bit of plumbing to do, but not too much work.

We should change the call prepare_backward_pass(model, options.duality_handler) at https://github.com/odow/SDDP.jl/blob/ad7a4183895a81afda8ed303389d7d3c92bd58b5/src/algorithm.jl#L472 to prepare_backward_pass(model, options.duality_handler, options.log), and update the corresponding methods here: https://github.com/odow/SDDP.jl/blob/ad7a4183895a81afda8ed303389d7d3c92bd58b5/src/plugins/duality_handlers.jl#L64-L71, here: https://github.com/odow/SDDP.jl/blob/ad7a4183895a81afda8ed303389d7d3c92bd58b5/src/plugins/duality_handlers.jl#L131-L136, and here: https://github.com/odow/SDDP.jl/blob/ad7a4183895a81afda8ed303389d7d3c92bd58b5/src/plugins/headers.jl#L185-L192

Then we can write a new method for BanditDuality.

It should probably start with a minimum of, say, 10 iterations of ContinuousConicDuality to build a prior. We could then do a hack and use initial priors of, say, 2/3 the variance for StrengthenedConicDuality and 1/3 the variance for LagrangianDuality (i.e., just mean-revert the realizations by some amount). A sketch of what this could look like follows.
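
A minimal sketch of how such a BanditDuality could pick an arm with Thompson-style sampling. The names and the warm-up rule here are hypothetical; the actual implementation linked later in this thread differs:

using Statistics

# Hypothetical sketch of a bandit over duality handlers. Each arm records
# the rewards (bound improvement per second) it has produced so far.
struct Arm
    handler::Any               # e.g. ContinuousConicDuality()
    rewards::Vector{Float64}
end

# Reward for one iteration, as suggested above.
reward(d_bound, d_time) = abs(d_bound) / max(d_time, 1e-6)

# Thompson-style selection: draw a score from each arm's Normal posterior and
# run the handler with the largest draw. Arms with too little data are always
# explored first (a stand-in for the warm-up/prior hack described above).
function choose(arms::Vector{Arm})
    scores = map(arms) do arm
        n = length(arm.rewards)
        n < 2 && return Inf
        mean(arm.rewards) + std(arm.rewards) * randn() / sqrt(n)
    end
    return arms[argmax(scores)]
end

# After an iteration, record the observed reward for the arm that was pulled.
update!(arm::Arm, d_bound, d_time) = push!(arm.rewards, reward(d_bound, d_time))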

odow avatar Sep 07 '21 21:09 odow

We could use this Bayesian learning in a few other contexts:

Trajectory depth in cyclic graphs

The default sampling scheme for the forward pass has some tunable parameters: https://github.com/odow/SDDP.jl/blob/493b48e256a88cb1cc898497d01a4d3245d3ffd0/src/plugins/sampling_schemes.jl#L38-L43

In particular, for cyclic graphs, you might want to learn between

# Roughly one cycle through the policy graph per forward pass.
A = InSampleMonteCarlo(
    max_depth = length(model.nodes),
    terminate_on_dummy_leaf = false,
)

# Roughly five cycles through the policy graph per forward pass.
B = InSampleMonteCarlo(
    max_depth = 5 * length(model.nodes),
    terminate_on_dummy_leaf = false,
)

# The default: no depth limit; terminate when a dummy leaf node is sampled.
C = InSampleMonteCarlo()

Revisiting forward passes

Rather than just looping through forward passes: https://github.com/odow/SDDP.jl/blob/493b48e256a88cb1cc898497d01a4d3245d3ffd0/src/plugins/forward_passes.jl#L136-L158 we should learn which ones to revisit according to some score. So you look at the delta achieved by each stored forward pass, as well as the expected delta of sampling a new trajectory, and then pick whichever is best (a sketch follows).
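
A hedged sketch of that scoring rule (the names are hypothetical, not SDDP.jl API): keep the bound improvement each stored trajectory produced the last time it was used, estimate the expected improvement of a fresh trajectory, and pick the larger.

# Hypothetical scoring rule for deciding whether to revisit a stored forward
# pass (index > 0) or sample a new trajectory (index 0).
function pick_forward_pass(stored_deltas::Vector{Float64}, fresh_delta_estimate::Float64)
    best_index, best_score = 0, fresh_delta_estimate
    for (i, delta) in enumerate(stored_deltas)
        if delta > best_score
            best_index, best_score = i, delta
        end
    end
    return best_index  # 0 means "sample a new trajectory"
end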

I think this is both computationally efficient, and helpful from a marketing aspect, because we can claim machine learning is being used to drive decision making within the algorithm.

odow avatar Sep 18 '21 05:09 odow

@guyichen09 here's a potentially open question: is there any work on learning when the reward is non-stationary?

In our case, we know that the reward for each arm will tend towards 0 as it gets pulled. The history is also action-dependent, which means the usual assumption that the reward distribution is independent of the algorithm's actions is not satisfied.

Some papers:

  • https://proceedings.neurips.cc/paper/2014/file/903ce9225fca3e988c2af215d4e544d3-Paper.pdf
  • https://arxiv.org/pdf/1904.07272.pdf

odow avatar Sep 19 '21 09:09 odow

I’m with you. Good idea.

When Daniel was looking at DRO / SDDP we looked at sampling forward paths according to the nominal distribution or according to the DRO distribution. In theory, the latter can get you in trouble with respect to not converging due to zero-probability branches. We did a convex combo / mixture. Lower bounds were significantly tighter when putting enough weight on DRO paths, meaning that I think your idea would make sense, at least according to some metrics. We also looked at out-of-sample quality of the resulting policy. That difference was less dramatic, but there may have been something there, too. Can’t remember for sure.

Dave

mortondp avatar Sep 19 '21 16:09 mortondp

Hi Oscar,

I am not sure if there is any work about non-stationary rewards. During my internship, I found an algorithm that uses Thompson Sampling for the combinatorial multi-armed bandit problem. In this algorithm, the reward is deterministic within each pull of an iteration; however, from iteration to iteration the reward changes and tends towards zero. They use a Bernoulli random variable to account for that. The specific algorithm is in the attached screenshot. [screenshot of the Thompson Sampling algorithm omitted]

Lines 9-10 of that algorithm are where they deal with the reward.

I think it is a possible option for what we are looking for. I will also research more on this topic.

Best,

Guyi

guyichen09 avatar Sep 24 '21 16:09 guyichen09

A few updates for this:

  • The BanditDuality is implemented:

https://github.com/odow/SDDP.jl/blob/23fcaa9085e6ec43e1e0423b4e64c078951e8101/src/plugins/duality_handlers.jl#L310-L412

  • I spent a fair bit of time getting a better understanding of the Lagrangian problem. We now just use a modified BFGS to solve it; I don't know why people do other, more complicated things. The nice thing about BFGS is that each step maintains dual feasibility, so we can terminate with a suboptimal dual, and it doesn't have any of the numerical issues associated with solving an LP or QP on the side (a much-simplified sketch of the idea is below). In general, though, I don't think we should ever really use Lagrangian. It sucks.

https://github.com/odow/SDDP.jl/blob/23fcaa9085e6ec43e1e0423b4e64c078951e8101/src/plugins/local_improvement_search.jl#L23-L130
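
For intuition, here is a much-simplified, hypothetical stand-in for that local improvement search: plain subgradient ascent on the concave Lagrangian dual rather than the modified BFGS the linked code uses. The evaluate callback (assumed here) returns the dual value and a subgradient at λ. The property it shares with the real approach is that every iterate is dual feasible, so we can stop early and still get a valid (if weaker) cut.

# Simplified stand-in for local_improvement_search.jl: maximise the concave
# dual L(λ) by subgradient ascent with a diminishing step, keeping the best
# dual-feasible point seen so far.
function subgradient_ascent(evaluate::Function, λ0::Vector{Float64}; iterations::Int = 100)
    λ = copy(λ0)
    best_λ, best_value = copy(λ0), -Inf
    for k in 1:iterations
        value, g = evaluate(λ)        # dual value and a subgradient at λ
        if value > best_value          # keep the best dual-feasible point
            best_value, best_λ = value, copy(λ)
        end
        λ .+= (1.0 / k) .* g           # diminishing step in the ascent direction
    end
    return best_value, best_λ          # valid even if we stop early
end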

odow avatar Nov 08 '21 07:11 odow

Closing because I think this is a dead end. The extra cost of Lagrangian duality just doesn't improve things enough to be worth it.

I've seen a few papers recently claiming to "do SDDiP" but they have mixed-integer-continuous state and control variables, and then sneakily say that "we just use the continuous dual, it might give a worse bound but it works well." And it does. So we don't need this.

odow avatar Dec 14 '22 00:12 odow