
Document how to add an auto-regressive stochastic process

odow opened this issue 2 years ago • 4 comments

julia> using SDDP, GLPK

julia> model = SDDP.LinearPolicyGraph(
           stages = 3,
           lower_bound = 0.0,
           optimizer = GLPK.Optimizer,
       ) do subproblem, t
           # ==========================================================================
           #     Regular SDDP.jl section
           #
           # Add variables, constraints, etc.
           N = 2  # num stocks
           @variable(subproblem, x[1:N] >= 0, SDDP.State, initial_value = 2)
           # ==========================================================================
           #     Price stuff. Assume a model like y_{t+1} = B_t y_t + e_t
           #
           # The sample space:
           #  - A vector of realizations
           #  - Each realization can be any data type you want. Here I have the B
           #    matrix and the innovation term e
           Ω = [
               (B = [0.8 0.2; 0.2 0.8], e = [-0.1, -0.1]),
               (B = [0.9 0.1; 0.1 0.9], e = [0.1, 0.1]),
           ]
           # Value of y at the root node. This is a bit clunky because SDDP.jl uses
           # tuples as the data structure for the prices. So either write:
           initial_value = (1.0, 2.0)
           # or use:
           initial_value = tuple([1.0, 2.0]...)
           # Lower and upper limits of y. These don't actually have to be true limits,
           # but getting them right can help convergence because we add some extra cuts
           # there.
           lower_bound = tuple([0.0, 0.0]...)
           upper_bound = tuple([10.0, 20.0]...)
           # This constant will require some tuning.
           lipschitz = tuple([100.0 for _ in 1:N]...)
           # The magic part.
           SDDP.add_objective_state(
               subproblem,
               initial_value = initial_value,
               lower_bound = lower_bound,
               upper_bound = upper_bound,
               lipschitz = lipschitz,
           ) do y, ω
               # y is the incoming price as a tuple
               # ω is the realization of the uncertainty. We're going to define what it
               # is below, but it is an element of Ω.
               #
               # We also need some more hackery to convert the y tuple to a vector
               # in order to do linear algebra.
               new_y = ω.B * collect(y) + ω.e
               # Convert the new_y back to a tuple and return
               return tuple(new_y...)
           end
           # ==========================================================================
           # The regular SDDP.parameterize function
           SDDP.parameterize(subproblem, Ω) do ω
               y = SDDP.objective_state(subproblem)  # This is new_y
               @stageobjective(subproblem, sum(y[i] * x[i].out for i in 1:N))
           end
       end
A policy graph with 3 nodes.
 Node indices: 1, 2, 3


julia> SDDP.train(model, iteration_limit = 5)
------------------------------------------------------------------------------
                      SDDP.jl (c) Oscar Dowson, 2017-21

Problem
  Nodes           : 3
  State variables : 2
  Scenarios       : 8.00000e+00
  Existing cuts   : false
  Subproblem structure                      : (min, max)
    Variables                               : (7, 7)
    VariableRef in MOI.LessThan{Float64}    : (2, 3)
    AffExpr in MOI.GreaterThan{Float64}     : (4, 4)
    VariableRef in MOI.GreaterThan{Float64} : (5, 5)
    AffExpr in MOI.EqualTo{Float64}         : (8, 8)
Options
  Solver          : serial mode
  Risk measure    : SDDP.Expectation()
  Sampling scheme : SDDP.InSampleMonteCarlo

Numerical stability report
  Non-zero Matrix range     [1e+00, 2e+01]
  Non-zero Objective range  [1e+00, 2e+00]
  Non-zero Bounds range     [1e+02, 1e+02]
  Non-zero RHS range        [0e+00, 0e+00]
No problems detected

 Iteration    Simulation       Bound         Time (s)    Proc. ID   # Solves
        1    0.000000e+00   0.000000e+00   6.227016e-03          1          9
        2    0.000000e+00   0.000000e+00   7.569075e-03          1         18
        3    0.000000e+00   0.000000e+00   9.315968e-03          1         27
        4    0.000000e+00   0.000000e+00   1.087403e-02          1         36
        5    0.000000e+00   0.000000e+00   1.217604e-02          1         45

Terminating training
  Status         : iteration_limit
  Total time (s) : 1.217604e-02
  Total solves   : 45
  Best bound     :  0.000000e+00
  Simulation CI  :  0.000000e+00 ± 0.000000e+00
------------------------------------------------------------------------------

julia> s = SDDP.simulate(model, 2);

julia> prices = map(s[1]) do data
           return data[:objective_state]
       end
3-element Vector{Tuple{Float64, Float64}}:
 (1.2000000000000002, 2.0)
 (1.3800000000000003, 2.02)
 (1.4080000000000004, 1.792)

julia> prices = map(s[2]) do data
           return data[:objective_state]
       end
3-element Vector{Tuple{Float64, Float64}}:
 (1.1, 1.7)
 (1.2600000000000002, 1.7400000000000002)
 (1.2560000000000002, 1.5440000000000003)
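
To see where these numbers come from, you can apply the price recursion by hand. The snippet below is not part of the original example; it reuses the first realization from Ω and the root-node value (1.0, 2.0), and the result is approximately (1.1, 1.7), which is the first price of the second simulation above.

B, e = [0.8 0.2; 0.2 0.8], [-0.1, -0.1]   # the first realization in Ω
y0 = [1.0, 2.0]                           # root-node value of y
B * y0 + e                                # ≈ [1.1, 1.7]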

odow commented Jul 08 '21 20:07

Hello Oscar,

So, the whole y thing is a bit mysterious; I will just copy that. But the ω part is confusing to me. You mention in the comments it is a realization of uncertainty that will be defined below, but I don't see where. Is it here? new_y = ω.B * collect(y) + ω.e

I thought ω would be a normal random variable, as in a VAR (vector autoregressive, not value-at-risk!) or AR process. Can you please clarify this part? More precisely, how would one do a VAR or an AR using the objective states?

Best, Bernardo

bernardokp commented Jul 14 '21 13:07

You mention in the comments it is a realization of uncertainty that will be defined below

It's the same ω as this one:

SDDP.parameterize(subproblem, Ω) do ω
   y = SDDP.objective_state(subproblem)  # This is new_y
   @stageobjective(subproblem, sum(y[i] * x[i].out for i in 1:N))
end

I thought ω would be a normal random variable

It can be anything. It just needs to be a finite discrete sample space. Of course, you may want to sample from some Normal distribution to generate it...
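
For example, here is a minimal sketch of building such a sample space by sampling the innovation term from a Normal distribution. The number of realizations, the 0.1 standard deviation, and the fixed B matrix are arbitrary choices for illustration:

using Distributions, Random

Random.seed!(1234)
B = [0.8 0.2; 0.2 0.8]  # held fixed here; it could also vary per realization
# 10 equally likely realizations of the innovation term e
Ω = [(B = B, e = rand(Normal(0.0, 0.1), 2)) for _ in 1:10]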

More precisely, how would one do a VAR or an AR using the objective states?

The example above, but probably with better values for B and e.
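
For a univariate AR(1) price, the same pattern collapses to scalars. A rough sketch follows; the coefficient 0.9, the innovation values, the bounds, the Lipschitz constant, and the single state variable x are placeholders for illustration, not tuned values:

using SDDP, GLPK

model = SDDP.LinearPolicyGraph(
    stages = 3,
    lower_bound = 0.0,
    optimizer = GLPK.Optimizer,
) do subproblem, t
    @variable(subproblem, x >= 0, SDDP.State, initial_value = 2)
    # Univariate AR(1): y_{t+1} = 0.9 * y_t + e_t, with e_t drawn from a finite set.
    Ω = [-0.1, 0.0, 0.1]
    SDDP.add_objective_state(
        subproblem,
        initial_value = 1.0,
        lower_bound = 0.0,
        upper_bound = 10.0,
        lipschitz = 100.0,
    ) do y, ω
        return 0.9 * y + ω  # the AR(1) update
    end
    SDDP.parameterize(subproblem, Ω) do ω
        y = SDDP.objective_state(subproblem)
        @stageobjective(subproblem, y * x.out)
    end
end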

In most (all?) examples in the tutorial, the uncertainty is too simple: it can take two or three values, and it is often one-dimensional. What if we had y_t = B_t^{ω_t} y_{t-1} + b_t^{ω_t}? How do we enter that into Julia using SDDP.jl? I could not find this case in any of the examples.

See the example above.

Do we sample normal random variables? Do we need to write this constraint explicitly? Do we generate the output in R and read it into Julia?

You need to generate a discrete sample space for B and b. How you do this is up to you. You could generate it in R, save it to a JSON file, and then use:

import JSON
data = JSON.parsefile("data.json")
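
Assuming the file stores a list of objects with a field "B" (a list of rows) and a field "e" (a list), which is a hypothetical layout rather than anything SDDP.jl requires, you could then convert the parsed data into the named-tuple sample space used in the example above:

Ω = map(data) do d
    # JSON.parsefile returns nested vectors; rebuild B as a Matrix{Float64}
    B = vcat([Float64.(row)' for row in d["B"]]...)
    (B = B, e = Float64.(d["e"]))
end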

The Distributions.jl package provides probability distributions in Julia:

using Distributions
D = Normal(0, 1)
d = rand(D, 10)  # Sample 10 random values from D

As a matter of fact, I could not find simple AR examples in the documentation, even though AR is by far the most popular process for hydrothermal models. Can you point us to examples where those cases are dealt with?

I don't know if there are any explicit examples. I should add one to the "how-to" section. I assumed it just followed naturally from the modeling (use a state-space expansion).
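
For the record, here is a minimal sketch of that state-space expansion for an AR(1) inflow on the right-hand side. The reservoir model, the coefficient 0.8, the bounds, and the innovation values are invented for illustration:

using SDDP, GLPK

model = SDDP.LinearPolicyGraph(
    stages = 3,
    lower_bound = 0.0,
    optimizer = GLPK.Optimizer,
) do subproblem, t
    # inflow_t = 0.8 * inflow_{t-1} + ε_t, so inflow becomes a state variable.
    @variable(subproblem, 0 <= volume <= 200, SDDP.State, initial_value = 100)
    @variable(subproblem, inflow >= 0, SDDP.State, initial_value = 50)
    @variable(subproblem, 0 <= generation <= 150)
    @variable(subproblem, spill >= 0)
    @variable(subproblem, ε)  # the innovation; fixed in parameterize
    @constraint(subproblem, inflow.out == 0.8 * inflow.in + ε)
    @constraint(subproblem, volume.out == volume.in + inflow.out - generation - spill)
    SDDP.parameterize(subproblem, [-10.0, 0.0, 10.0]) do ω
        JuMP.fix(ε, ω)
    end
    # Penalize unserved demand of 150 units (placeholder objective).
    @stageobjective(subproblem, 100 * (150 - generation))
end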

odow commented Jul 15 '21 20:07

Renamed this to reflect that we should add documentation on different RHS processes. I'm unlikely to do this, so if someone reading this has an AR example and wants to add it, please reach out.

odow commented Aug 15 '21 01:08

@bernardokp would also like a GARCH example where the variance is accounted for: https://github.com/odow/SDDP.jl/issues/504

odow commented Feb 11 '22 21:02

Closing in favor of #496

odow commented Dec 16 '22 01:12