SDDP.jl
Stochastic Dual Dynamic Programming in Julia
If you specify `lower_bound` but not `upper_bound` on a maximization model (`sense = :Max`), there is no valid initial bound on the value function and the model cannot be trained.
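A minimal sketch (the toy model and bound values are mine, not from the thread) showing the bound paired with the objective sense: a `:Max` model needs `upper_bound`, a `:Min` model needs `lower_bound`.

```julia
using SDDP, GLPK

model = SDDP.LinearPolicyGraph(;
    stages = 3,
    sense = :Max,
    upper_bound = 100.0,  # required for :Max; use `lower_bound` for :Min
    optimizer = GLPK.Optimizer,
) do subproblem, t
    @variable(subproblem, 0 <= x <= 10, SDDP.State, initial_value = 0)
    @stageobjective(subproblem, x.out)
end
```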
Some feedback from a user in my emails (search "Suggestions for JuMP tutorial", 28 February):

https://odow.github.io/SDDP.jl/stable/tutorial/basic/07_arma/
https://odow.github.io/SDDP.jl/stable/tutorial/advanced/11_objective_states/

Doubts about the vector auto-regressive model:

```julia
model = SDDP.LinearPolicyGraph(
    stages = 3,
...
```
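For readers following those links, here is a hedged sketch of the idea in the ARMA tutorial: carry the auto-regressive term as an extra state variable so each subproblem stays stagewise independent. The AR(1) coefficient, noise support, and toy dynamics are my illustrative assumptions, not the user's model.

```julia
using SDDP, GLPK

ρ = 0.5  # assumed AR(1) coefficient

model = SDDP.LinearPolicyGraph(;
    stages = 3,
    lower_bound = 0.0,
    optimizer = GLPK.Optimizer,
) do subproblem, t
    @variable(subproblem, x >= 0, SDDP.State, initial_value = 1.0)
    # The AR(1) error term is itself a state variable.
    @variable(subproblem, ε, SDDP.State, initial_value = 0.0)
    @variable(subproblem, ω)
    @constraint(subproblem, ε.out == ρ * ε.in + ω)
    @constraint(subproblem, x.out == x.in + ε.out)
    # The stagewise-independent innovation ω is fixed in each realization.
    SDDP.parameterize(subproblem, [-0.1, 0.0, 0.1]) do ϕ
        fix(ω, ϕ)
    end
    @stageobjective(subproblem, x.out)
end
```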
Hi, Oscar! I am currently solving a problem with SDDP.jl (actually a replication of your Dairy Farm Model), and I came across this error when trying to train the...
Hello! For people coming from other communities, or if you are a limited programmer like me, getting started with SDDP.jl can be challenging. Some people come from the MDP...
**Is it true that MDP = policy graphs without squiggly arrows, where the transition probabilities are given by the p_{ij}'s?** If that is true, which I think it is, then I...
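If so, the correspondence can be made concrete with `SDDP.MarkovianPolicyGraph`: a graph whose only randomness is the node transition p_ij, with no in-node (squiggly-arrow) noise, is a finite-horizon MDP. A hedged sketch; the transition matrices and toy stage data are my illustration, not from the thread.

```julia
using SDDP, GLPK

model = SDDP.MarkovianPolicyGraph(;
    transition_matrices = [
        [0.5 0.5],           # stage 1: initial distribution over 2 Markov states
        [0.8 0.2; 0.2 0.8],  # p_ij from stage 1 to stage 2
        [0.8 0.2; 0.2 0.8],  # p_ij from stage 2 to stage 3
    ],
    lower_bound = 0.0,
    optimizer = GLPK.Optimizer,
) do subproblem, node
    t, i = node                  # stage t and Markov state i
    price = i == 1 ? 1.0 : 2.0   # data depends only on the Markov state
    @variable(subproblem, 0 <= x <= 5, SDDP.State, initial_value = 2)
    @stageobjective(subproblem, price * x.out)
    # No SDDP.parameterize call: with no in-node noise, this is exactly an
    # MDP with transition probabilities p_ij.
end
```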
```julia
julia> using SDDP, GLPK

julia> model = SDDP.LinearPolicyGraph(
           stages = 3,
           lower_bound = 0.0,
           optimizer = GLPK.Optimizer,
       ) do subproblem, t
           # ==================================================================
           # Regular SDDP.jl section
           # ...
```
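The preview above is truncated; once the builder is complete, the usual next steps are to train and then simulate the policy. A minimal sketch (the iteration and simulation counts are arbitrary choices of mine):

```julia
# Assumes `model` was constructed successfully above.
SDDP.train(model; iteration_limit = 10)   # run 10 SDDP iterations
simulations = SDDP.simulate(model, 100)   # simulate 100 sample paths
println("Bound: ", SDDP.calculate_bound(model))
```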
I am running a conic subproblem for an SOCP relaxation of the OPF problem. For the first few iterations of SDDP, it works fine. However, for the test case I...
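For context, a hedged sketch (not the poster's model) of how a conic subproblem typically enters an SDDP.jl policy graph; the SCS optimizer and the toy second-order cone constraint are my assumptions. If a conic solver starts failing after a few iterations, my first guess would be numerical trouble as cuts accumulate, so tightening solver tolerances is worth trying.

```julia
using SDDP, JuMP, SCS

model = SDDP.LinearPolicyGraph(;
    stages = 3,
    lower_bound = 0.0,
    optimizer = SCS.Optimizer,  # the subproblems need a conic-capable solver
) do subproblem, t
    @variable(subproblem, x >= 0, SDDP.State, initial_value = 1.0)
    @variable(subproblem, u >= 0)
    # Toy SOCP-style constraint: u bounds the norm of the state change.
    @constraint(subproblem, [u; x.out - x.in] in SecondOrderCone())
    @stageobjective(subproblem, u)
end
```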
@adow031 suggested this