Add ImportanceSamplingForwardPass
```julia
using SDDP
import HiGHS
import Random

model = SDDP.LinearPolicyGraph(;
    stages = 3,
    sense = :Min,
    lower_bound = 0.0,
    optimizer = HiGHS.Optimizer,
) do sp, t
    @variable(sp, 5 <= x <= 15, SDDP.State, initial_value = 10)
    @variable(sp, g_t >= 0)
    @variable(sp, g_h >= 0)
    @variable(sp, s >= 0)
    @constraint(sp, balance, x.out - x.in + g_h + s == 0)
    @constraint(sp, demand, g_h + g_t == 0)
    @stageobjective(sp, s + t * g_t)
    SDDP.parameterize(sp, [(0.0, 7.5), (3.0, 5.0), (10.0, 2.5)]) do w
        set_normalized_rhs(balance, w[1])
        set_normalized_rhs(demand, w[2])
        return
    end
    return
end

Random.seed!(1234)

SDDP.train(
    model;
    log_every_iteration = true,
    iteration_limit = 10,
    risk_measure = SDDP.Entropic(1e-1),
    forward_pass = SDDP.ImportanceSamplingForwardPass(),
    cut_type = SDDP.MULTI_CUT,
)
```
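As a usage note (not part of the PR itself), the trained policy could then be simulated. This sketch assumes the standard `SDDP.simulate` API; note that the importance-sampling pmf is a training-time device, so simulation still samples from the nominal distribution:

```julia
using Statistics

# Simulate the trained policy for 100 replications, recording the state
# and the generation/spill variables at each stage.
simulations = SDDP.simulate(model, 100, [:x, :g_t, :g_h, :s])

# Each simulation is a vector of per-stage dictionaries; sum the stage
# objectives to get the total cost of each replication.
objectives = [sum(stage[:stage_objective] for stage in sim) for sim in simulations]
println("Mean simulated cost: ", mean(objectives))
```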
Codecov Report

All modified and coverable lines are covered by tests :white_check_mark:

Project coverage is 96.63%. Comparing base (f8a0c7c) to head (345fa91). Report is 1 commit behind head on master.

```
@@            Coverage Diff             @@
##           master     #848      +/-   ##
==========================================
+ Coverage   96.60%   96.63%   +0.03%
==========================================
  Files          27       27
  Lines        3974     4016      +42
==========================================
+ Hits         3839     3881      +42
  Misses        135      135
==========================================
```
This is it: https://arxiv.org/abs/2307.13190 ?
Ah I knew there was something! A student of Dave is trying to do something similar.
💯 This was the next step I had in mind:
They're coming at it from a different direction/motivation, but it's the same core technique.
Hi Oscar,
For now, if my understanding is correct, the code for ImportanceSamplingForwardPass is still a skeleton? I noticed the following code block:

```julia
for child in node.children
    for noise in model[child.term].noise_terms
        push!(nominal_probability, child.probability * noise.probability)
        push!(support, (child.term, noise.term))
    end
end
```
The block above seems to be a standard forward pass. I think for a linear policy graph the child is deterministic, and we sample the noise using the original pmf. The missing part I need to add is the importance-sampling pmf (for the entropic measure) and the weight adjustment for the noise realization at every stage (or should I record the weight adjustment in the simulation part?).
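To make that missing piece concrete, here is a standalone sketch (not SDDP.jl code; the function and variable names are hypothetical) of one natural choice for the entropic case: exponentially tilt the nominal pmf by the stage costs, then carry the likelihood-ratio weights `p ./ q` so the forward estimate stays consistent with the nominal distribution:

```julia
# Exponential tilting of a nominal pmf `p` by costs `z` with entropic risk
# parameter `gamma`: q_i ∝ p_i * exp(gamma * z_i). Higher-cost outcomes are
# sampled more often; the weights w_i = p_i / q_i undo the bias.
function tilted_pmf(p::Vector{Float64}, z::Vector{Float64}, gamma::Float64)
    q = p .* exp.(gamma .* z)  # unnormalized tilted probabilities
    return q ./ sum(q)         # normalize to a pmf
end

p = [0.5, 0.3, 0.2]            # nominal probabilities
z = [1.0, 2.0, 5.0]            # hypothetical stage costs per outcome
q = tilted_pmf(p, z, 0.1)
weights = p ./ q               # importance weights for the sampled outcome
```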
This was the next step I had in mind: ...
I haven't updated the arXiv version yet, but another alternative for the single-cut case is:
- cache the cuts individually, in addition to the aggregated (single) cuts
- after solving each optimization problem with the single cut, evaluate the individual cuts at the optimized states to check which group is "more active"
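That check could be sketched as follows (hypothetical types, not the SDDP.jl cut structures): cache cuts per group, evaluate each group's cuts at the optimized state, and pick the group whose best cut is most binding:

```julia
# A cut theta >= alpha + beta' * x, cached individually rather than
# aggregated into the single cut.
struct Cut
    alpha::Float64
    beta::Vector{Float64}
end

evaluate(c::Cut, x::Vector{Float64}) = c.alpha + c.beta' * x

# Return the key of the group whose cuts give the largest (most active)
# value at the optimized state `x`.
function most_active_group(groups::Dict{Int,Vector{Cut}}, x::Vector{Float64})
    best_group, best_val = 0, -Inf
    for (g, cuts) in groups
        v = maximum(evaluate(c, x) for c in cuts)
        if v > best_val
            best_group, best_val = g, v
        end
    end
    return best_group
end

groups = Dict(
    1 => [Cut(0.0, [1.0]), Cut(1.0, [0.5])],
    2 => [Cut(5.0, [0.0])],
)
g = most_active_group(groups, [1.0])
```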
It is a bit weird in the sense that if you are storing all cuts individually, why not just do multicut? Maybe single cut is faster for that problem.
Yeah, I thought of this. I initially considered looping through the cuts and evaluating them before I realized I could reach in and find the right JuMP.value(theta).
I guess the point is that there is a lot of information in the disaggregated cuts that we are throwing away just for the sake of adding fewer constraints.
There are still some issues with this, but it's a useful start