SDDP.jl
SDDP with Markov chain
Hi Oscar,
I have been solving SDDP with a Markov chain in serial mode. I wanted to use multi-cut instead, but I get an error:
--------------------------------------------------------------------------------
SDDP.jl (c) Oscar Dowson, 2017-20
Solver: serial mode
 Iteration    Simulation       Bound          Time (s)      Proc. ID    # Solves
         1    3.616467e+07     1.903413e+03   2.163500e+01         1         745
         2    3.021835e+07     1.903413e+03   2.887300e+01         1        1490
         3    3.088366e+07     1.903413e+03   4.025500e+01         1        2235
ERROR: LoadError: Expected 140 local θ variables but there were 180.
Stacktrace:
[1] error(::String) at .\error.jl:33
[2] _add_locals_if_necessary(::SDDP.Node{Tuple{Int64,Int64}}, ::SDDP.BellmanFunction, ::Int64) at C:\Users\esnil\.julia\packages\SDDP\tCaKk\src\plugins\bellman_functions.jl:463
[3] refine_bellman_function(::SDDP.PolicyGraph{Tuple{Int64,Int64}}, ::SDDP.Node{Tuple{Int64,Int64}}, ::SDDP.BellmanFunction, ::SDDP.Expectation, ::Dict{Symbol,Float64}, ::Array{Dict{Symbol,Float64},1}, ::Array{SDDP.Noise,1}, ::Array{Float64,1}, ::Array{Float64,1}) at C:\Users\esnil\.julia\packages\SDDP\tCaKk\src\plugins\bellman_functions.jl:348
[4] backward_pass(::SDDP.PolicyGraph{Tuple{Int64,Int64}}, ::SDDP.Options{Tuple{Int64,Int64}}, ::Array{Tuple{Tuple{Int64,Int64},Any},1}, ::Array{Dict{Symbol,Float64},1}, ::Array{Tuple{},1}, ::Array{Tuple{Int64,Dict{Tuple{Int64,Int64},Float64}},1}) at C:\Users\esnil\.julia\packages\SDDP\tCaKk\src\algorithm.jl:548
[5] macro expansion at C:\Users\esnil\.julia\packages\SDDP\tCaKk\src\algorithm.jl:770 [inlined]
[6] macro expansion at C:\Users\esnil\.julia\packages\TimerOutputs\7Id5J\src\TimerOutput.jl:214 [inlined]
[7] iteration(::SDDP.PolicyGraph{Tuple{Int64,Int64}}, ::SDDP.Options{Tuple{Int64,Int64}}) at C:\Users\esnil\.julia\packages\SDDP\tCaKk\src\algorithm.jl:769
[8] master_loop at C:\Users\esnil\.julia\packages\SDDP\tCaKk\src\plugins\parallel_schemes.jl:24 [inlined]
[9] train(::SDDP.PolicyGraph{Tuple{Int64,Int64}}; iteration_limit::Int64, time_limit::Nothing, print_level::Int64, log_file::String, log_frequency::Int64, run_numerical_stability_report::Bool, stopping_rules::Array{SDDP.AbstractStoppingRule,1}, risk_measure::SDDP.Expectation, sampling_scheme::SDDP.InSampleMonteCarlo, cut_type::SDDP.CutType, cycle_discretization_delta::Float64, refine_at_similar_nodes::Bool, cut_deletion_minimum::Int64, backward_sampling_scheme::SDDP.CompleteSampler, dashboard::Bool, parallel_scheme::SDDP.Serial) at C:\Users\esnil\.julia\packages\SDDP\tCaKk\src\algorithm.jl:971
[10] top-level scope at C:\Users\esnil\Dropbox\Proyecto_Energia\Julia-1-4-0\SDDP\dinamico_stoch_markov_v9.1_check_parallel_mejorado5.jl:393
[11] include(::String) at .\client.jl:439
[12] top-level scope at REPL[2]:1
in expression starting at C:\Users\esnil\Dropbox\Proyecto_Energia\Julia-1-4-0\SDDP\dinamico_stoch_markov_v9.1_check_parallel_mejorado5.jl:393
Interesting. Can you email me code to reproduce?
If you have 140 scenarios, using multi-cut is likely to slow things down, so I would recommend just using single-cut.
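For reference, a minimal sketch of the two calls (assuming your policy graph is named model; single-cut is the default, so the cut_type keyword is only needed for multi-cut):

# Single-cut (the default): one aggregated θ approximates the expected
# cost-to-go at each node.
SDDP.train(model; cut_type = SDDP.SINGLE_CUT)

# Multi-cut: one local θ per realization, so many more variables and
# constraints when there are many scenarios.
SDDP.train(model; cut_type = SDDP.MULTI_CUT)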
Thanks for sending the code. Here is a minimal reproducer.
using SDDP
using GLPK

model = SDDP.MarkovianPolicyGraph(
    transition_matrices = [[0.5 0.5], [1.0 0.0; 0.4 0.6]],
    lower_bound = 0.0,
    optimizer = GLPK.Optimizer
) do sp, node
    @variable(sp, x, SDDP.State, initial_value = 0)
    @stageobjective(sp, 0.0)
end

SDDP.train(model; cut_type = SDDP.MULTI_CUT)
--------------------------------------------------------------------------------
SDDP.jl (c) Oscar Dowson, 2017-20
Numerical stability report
  Non-zero Matrix range     [0e+00, 0e+00]
  Non-zero Objective range  [1e+00, 1e+00]
  Non-zero Bounds range     [0e+00, 0e+00]
  Non-zero RHS range        [0e+00, 0e+00]
No problems detected
Solver: serial mode
 Iteration    Simulation       Bound          Time (s)      Proc. ID    # Solves
┌ Warning: You haven't specified a stopping rule! You can only terminate the call to SDDP.train via a keyboard interrupt ([CTRL+C]).
└ @ SDDP ~/.julia/packages/SDDP/f3iyy/src/algorithm.jl:840
         1    0.000000e+00     0.000000e+00   8.358955e-04         1           5
ERROR: Expected 2 local θ variables but there were 1.
Stacktrace:
[1] error(::String) at ./error.jl:33
[2] _add_locals_if_necessary(::SDDP.Node{Tuple{Int64,Int64}}, ::SDDP.BellmanFunction, ::Int64) at /Users/oscar/.julia/packages/SDDP/f3iyy/src/plugins/bellman_functions.jl:463
[3] refine_bellman_function(::SDDP.PolicyGraph{Tuple{Int64,Int64}}, ::SDDP.Node{Tuple{Int64,Int64}}, ::SDDP.BellmanFunction, ::SDDP.Expectation, ::Dict{Symbol,Float64}, ::Array{Dict{Symbol,Float64},1}, ::Array{SDDP.Noise,1}, ::Array{Float64,1}, ::Array{Float64,1}) at /Users/oscar/.julia/packages/SDDP/f3iyy/src/plugins/bellman_functions.jl:348
[4] backward_pass(::SDDP.PolicyGraph{Tuple{Int64,Int64}}, ::SDDP.Options{Tuple{Int64,Int64}}, ::Array{Tuple{Tuple{Int64,Int64},Any},1}, ::Array{Dict{Symbol,Float64},1}, ::Array{Tuple{},1}, ::Array{Tuple{Int64,Dict{Tuple{Int64,Int64},Float64}},1}) at /Users/oscar/.julia/packages/SDDP/f3iyy/src/algorithm.jl:486
[5] macro expansion at /Users/oscar/.julia/packages/SDDP/f3iyy/src/algorithm.jl:685 [inlined]
[6] macro expansion at /Users/oscar/.julia/packages/TimerOutputs/dVnaw/src/TimerOutput.jl:190 [inlined]
[7] iteration(::SDDP.PolicyGraph{Tuple{Int64,Int64}}, ::SDDP.Options{Tuple{Int64,Int64}}) at /Users/oscar/.julia/packages/SDDP/f3iyy/src/algorithm.jl:684
[8] master_loop at /Users/oscar/.julia/packages/SDDP/f3iyy/src/plugins/parallel_schemes.jl:24 [inlined]
[9] train(::SDDP.PolicyGraph{Tuple{Int64,Int64}}; iteration_limit::Nothing, time_limit::Nothing, print_level::Int64, log_file::String, log_frequency::Int64, run_numerical_stability_report::Bool, stopping_rules::Array{SDDP.AbstractStoppingRule,1}, risk_measure::SDDP.Expectation, sampling_scheme::SDDP.InSampleMonteCarlo, cut_type::SDDP.CutType, cycle_discretization_delta::Float64, refine_at_similar_nodes::Bool, cut_deletion_minimum::Int64, backward_sampling_scheme::SDDP.CompleteSampler, dashboard::Bool, parallel_scheme::SDDP.Serial, forward_pass::SDDP.DefaultForwardPass) at /Users/oscar/.julia/packages/SDDP/f3iyy/src/algorithm.jl:890
[10] top-level scope at REPL[53]:1
The issue is some interaction between Markovian policy graphs that have a 0.0 entry in the transition matrix and the multi-cut implementation.
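Until there is a fix, two possible workarounds (untested sketches; the second rests on my guess that the zero-probability arc leaves nodes (1, 1) and (1, 2) with different numbers of children, so a multi-cut refined at one node and shared with the other expects the wrong number of local θ variables — note that refine_at_similar_nodes appears in the train signature in the stack trace above):

# Workaround 1: fall back to the default single-cut mode, which does not
# create per-realization local θ variables.
SDDP.train(model)

# Workaround 2 (a guess, untested): keep multi-cut but disable cut sharing
# between nodes in the same stage.
SDDP.train(model; cut_type = SDDP.MULTI_CUT, refine_at_similar_nodes = false)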
Closing because this seems fixed now?
julia> using SDDP

julia> using HiGHS

julia> model = SDDP.MarkovianPolicyGraph(
           transition_matrices = [[0.5 0.5], [1.0 0.0; 0.4 0.6]],
           lower_bound = 0.0,
           optimizer = HiGHS.Optimizer
       ) do sp, node
           @variable(sp, x, SDDP.State, initial_value = 0)
           @stageobjective(sp, 0.0)
       end
A policy graph with 4 nodes.
 Node indices: (1, 1), (1, 2), (2, 1), (2, 2)

julia> SDDP.train(model; cut_type = SDDP.MULTI_CUT, iteration_limit = 3)
------------------------------------------------------------------------------
SDDP.jl (c) Oscar Dowson and SDDP.jl contributors, 2017-22
Problem
  Nodes           : 4
  State variables : 1
  Scenarios       : 3.00000e+00
  Existing cuts   : false
  Subproblem structure                      : (min, max)
    Variables                               : (3, 3)
    VariableRef in MOI.LessThan{Float64}    : (1, 1)
    VariableRef in MOI.GreaterThan{Float64} : (1, 1)
Options
  Solver          : serial mode
  Risk measure    : SDDP.Expectation()
  Sampling scheme : SDDP.InSampleMonteCarlo
Numerical stability report
  Non-zero Matrix range     [0e+00, 0e+00]
  Non-zero Objective range  [1e+00, 1e+00]
  Non-zero Bounds range     [0e+00, 0e+00]
  Non-zero RHS range        [0e+00, 0e+00]
No problems detected
 Iteration    Simulation       Bound          Time (s)      Proc. ID    # Solves
         1    0.000000e+00     0.000000e+00   2.614021e-03         1           6
         2    0.000000e+00     0.000000e+00   3.891945e-03         1          12
         3    0.000000e+00     0.000000e+00   4.832983e-03         1          17
Terminating training
  Status         : iteration_limit
  Total time (s) : 4.832983e-03
  Total solves   : 17
  Best bound     : 0.000000e+00
  Simulation CI  : 0.000000e+00 ± 0.000000e+00
------------------------------------------------------------------------------