
ForwardPass plugins: Part II

odow opened this issue 3 years ago

In https://github.com/odow/SDDP.jl/issues/295, we introduced forward pass plugins. The motivation is @andrewrosemberg's experiments with different network designs.

Why do we want different models?

Transmission models are approximations of reality. If we train a model using a poor approximation, then we will obtain a poor policy.

The tricky thing is that we can solve the proper AC-OPF, but this is non-convex, so we can't use it in SDDP.jl.

What we really want to do is to simulate forward passes with the AC-OPF to obtain forward trajectories, and then refine the value functions on the backward pass using some convex approximation (e.g., DC with line losses).

It isn't sufficient to use DC on the forward pass, because it will visit the "wrong" points in the state space, and so the policy will be suboptimal when simulated under the true AC model.

Previous attempts

At present, it's easy to build and train a model that uses the same formulation on the forward and backward passes (e.g., NFA-NFA or DC-DC). However, you can't mix formulations, or use the nonlinear AC-OPF.

The previous code hacked around this (https://github.com/andrewrosemberg/SDDP.jl/tree/forw_diff_back), but it's now well out of date.

Proposed solution

The ideal solution to this is for SDDP.jl to have some notion of separate models on the forward and backward passes. However, this is a pretty niche request, and it would double the memory usage. I'm not going to do this.

Instead, we can leverage the forward pass plugins as follows.

  • Build a DC model and an AC model.
  • Train the DC model for N iterations.
  • Write the cuts from the DC model to file and load them into the AC model.
  • Simulate the AC model N times.
  • Use those simulations as the forward passes for the next round of training the DC model.

This logic can be encapsulated within a forward pass plugin so that it appears pretty seamless.

Code

Here's a quick sketch of a possible implementation:

# A forward pass plugin that returns trajectories simulated with a separate
# forward_model (e.g., the nonconvex AC-OPF), refreshed in batches of
# batch_size by loading the latest cuts.
mutable struct RosembergForwardPass{T} <: SDDP.AbstractForwardPass
    forward_model::SDDP.PolicyGraph{T}
    batch_size::Int
    batches::Vector{Any}
    counter::Int

    function RosembergForwardPass(;
        forward_model::SDDP.PolicyGraph{T},
        batch_size::Int,
    ) where {T}
        return new{T}(forward_model, batch_size, Any[], 0)
    end
end

function SDDP.forward_pass(
    model::SDDP.PolicyGraph,
    options::SDDP.Options,
    fp::RosembergForwardPass,
)
    fp.counter += 1
    if fp.counter > length(fp.batches)
        # The current batch is exhausted: refresh the cuts in the forward
        # model and simulate a new batch of trajectories.
        _fetch_new_simulations(model, options, fp)
        fp.counter = 1
    end
    return fp.batches[fp.counter]
end

function _fetch_new_simulations(
    model::SDDP.PolicyGraph,
    options::SDDP.Options,
    fp::RosembergForwardPass,
)
    # Update the cuts. We probably need some extra stuff here: you only need
    # to load cuts that are not already added, etc. Alternatively, we could
    # just rebuild the forward_model every time? Note that the cut file is
    # JSON-formatted, regardless of the extension.
    SDDP.write_cuts_to_file(model, "cuts.json")
    SDDP.read_cuts_from_file(fp.forward_model, "cuts.json")
    # Simulate a new batch of forward passes with the forward model.
    fp.batches = [
        SDDP.forward_pass(fp.forward_model, options, SDDP.DefaultForwardPass())
        for _ in 1:fp.batch_size
    ]
    return
end
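
For context, usage might look like the following. This is only a sketch: build_dc_model and build_ac_model are hypothetical user functions that construct two policy graphs with matching state variables, while forward_pass is the existing keyword argument of SDDP.train.

dc_model = build_dc_model()  # hypothetical; builds the convex DC model
ac_model = build_ac_model()  # hypothetical; builds the nonconvex AC model
SDDP.train(
    dc_model;
    iteration_limit = 100,
    # Plug in the custom forward pass via the existing keyword argument.
    forward_pass = RosembergForwardPass(
        forward_model = ac_model,
        batch_size = 10,
    ),
)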

Pros and cons

The benefit of this approach is that it is simple.

The downsides are that:

  • It requires two SDDP.PolicyGraph models (although I don't see a way of avoiding this).
  • Batching the passes may slow convergence, although given the nature of the experiment, that's not a high priority. You could set the batch size to 1, but that would just result in a lot of file I/O moving the cuts across.

odow avatar Jun 14 '21 09:06 odow

@andrewrosemberg does this seem reasonable? Is it easy to build the forward and backward models with the same state variables?

odow avatar Jun 14 '21 09:06 odow

I'm not sure how I missed this! This went to my spam email, which makes me very sad.

It looks great. I need to remember all the pitfalls I hit implementing it the first time, but this seems to be in the right direction. I will try to post here all the important points I needed to check last time.

andrewrosemberg avatar Jan 13 '23 11:01 andrewrosemberg

The first ones I remember (which @odow already mentions here):

  • [ ] I used cut.constraint_ref as the unique ID to check if the cut already exists. However, I had to add a list of constraint references in model.ext. I don't know if this should live here or in the user code.
  • [ ] We must ensure that the state variables are named identically in both models. Since my implementation built the forward and backward models together, this wasn't a problem. Here, however, we either need to assume the user guarantees the names match (and perhaps add a check, as sketched below), or add a mapping argument to RosembergForwardPass.
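
For the second point, a minimal sketch of such a check, assuming both models are built over the same graph. Note that node.states is an internal SDDP.jl detail, so this may need updating between versions:

function check_matching_states(a::SDDP.PolicyGraph, b::SDDP.PolicyGraph)
    for (index, node_a) in a.nodes
        node_b = b[index]
        # node.states is an internal Dict mapping the symbolic name of each
        # state variable to its incoming and outgoing JuMP variables.
        keys_a = sort!(collect(keys(node_a.states)))
        keys_b = sort!(collect(keys(node_b.states)))
        if keys_a != keys_b
            error("State variables of node $index do not match: $keys_a vs $keys_b")
        end
    end
    return
end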

andrewrosemberg avatar Jan 13 '23 11:01 andrewrosemberg

This is really asking for a function to compute the convex relaxation of an arbitrary JuMP model. But that's a different ballpark. Instead of having separate models, we should really just have two subproblems within each node.

odow avatar May 01 '23 10:05 odow

A good way to get started is a function that implements Andrew's algorithm. Pseudocode would look like:

nonconvex_model = SDDP.PolicyGraph(...)
convex_model = SDDP.PolicyGraph(...)
function train_nonconvex(nonconvex_model, convex_model)
    for _ in 1:50
        # Simulate the nonconvex model to obtain forward trajectories.
        passes = SDDP.simulate(nonconvex_model, 100)
        new_forward_pass = NewForwardPass(passes)
        # Refine the convex model's value functions along those trajectories.
        SDDP.train(convex_model; iteration_limit = 100, forward_pass = new_forward_pass)
        copy_cuts_from_model(nonconvex_model, convex_model)
    end
end

For now, you could imagine that nonconvex_model and convex_model are two copies of the same model. Don't worry if they're different; just check that you can train and copy a set of cuts from one model to another.
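
As a starting point, copy_cuts_from_model could be a thin wrapper over the existing file-based cut API; a minimal sketch (the temporary file is just a convenient transport):

# Copy all cuts from `source` into `destination` by round-tripping through
# the existing file-based API. Note that repeated calls re-add every cut,
# which is the duplication problem discussed below.
function copy_cuts_from_model(
    destination::SDDP.PolicyGraph,
    source::SDDP.PolicyGraph,
)
    filename = tempname() * ".json"
    SDDP.write_cuts_to_file(source, filename)
    SDDP.read_cuts_from_file(destination, filename)
    rm(filename)
    return
end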

odow avatar May 10 '23 10:05 odow

I had to check if the cuts were already in the model because the function copy_cuts_from_model passed all cuts at every iteration, which made the number of cuts increase exponentially.

How I did it: https://github.com/andrewrosemberg/SDDP.jl/commit/895a5b91685868c87763c33775b68ae1a37d6d23

I can help out if needed!

@odow would it be OK to add such a check in copy_cuts_from_model?
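
For illustration, one way to add the check without touching SDDP.jl internals is to filter the cut file before reading it back. This is a hedged sketch: it assumes the JSON layout written by SDDP.write_cuts_to_file (an array of per-node objects, each with a "single_cuts" array), and it ignores multi-cuts:

import JSON

# Session-level record of cuts we have already copied, keyed by the hash of
# the parsed cut dictionary.
const _SEEN_CUTS = Set{UInt}()

function copy_new_cuts_from_model(
    destination::SDDP.PolicyGraph,
    source::SDDP.PolicyGraph,
)
    filename = tempname() * ".json"
    SDDP.write_cuts_to_file(source, filename)
    data = JSON.parsefile(filename)
    for node in data
        # Drop cuts we have copied before, then record the new ones.
        filter!(cut -> !(hash(cut) in _SEEN_CUTS), node["single_cuts"])
        foreach(cut -> push!(_SEEN_CUTS, hash(cut)), node["single_cuts"])
    end
    open(filename, "w") do io
        JSON.print(io, data)
    end
    SDDP.read_cuts_from_file(destination, filename)
    rm(filename)
    return
end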

andrewrosemberg avatar May 10 '23 19:05 andrewrosemberg

We actually probably have enough code in the asynchronous stuff to make this work: https://github.com/odow/SDDP.jl/blob/a382ea96a6531c774eafa6fa0e73dbede64e83a6/src/plugins/parallel_schemes.jl#L135 https://github.com/odow/SDDP.jl/blob/a382ea96a6531c774eafa6fa0e73dbede64e83a6/src/plugins/parallel_schemes.jl#L167-L187

What about something like:

nonconvex_model = SDDP.PolicyGraph(...)
convex_model = SDDP.PolicyGraph(...)
function train_nonconvex(nonconvex_model, convex_model)
    has_converged = false
    options = SDDP.Options(...TODO...)
    while !has_converged
        # Use one simulation of the nonconvex model as the next forward pass.
        passes = SDDP.simulate(nonconvex_model, 1)
        options.forward_pass = NewForwardPass(passes ... TODO ...)
        result = SDDP.iteration(convex_model, options)
        has_converged = result.has_converged
        # Copy the new cuts into the nonconvex model.
        SDDP.slave_update(nonconvex_model, result)
    end
    return
end

odow avatar May 10 '23 20:05 odow

Just to clarify: SDDP.iteration already has a forward_pass call inside. Would we still need the SDDP.simulate(nonconvex_model, 1) before it?

Or can we just:

nonconvex_model = SDDP.PolicyGraph(...)
convex_model = SDDP.PolicyGraph(...)
function train_nonconvex(nonconvex_model, convex_model)
    has_converged = false
    options = SDDP.Options(...TODO...)
    while !has_converged
        options.forward_pass = NewForwardPass(nonconvex_model, 1)
        result = SDDP.iteration(convex_model, options)
        has_converged = result.has_converged
        SDDP.slave_update(nonconvex_model, result)
    end
    return
end

andrewrosemberg avatar May 11 '23 00:05 andrewrosemberg

I guess I haven't really thought through the details. It's probably a lot more work than I'm thinking.

odow avatar May 11 '23 01:05 odow

I will try it out and post what I find here.

andrewrosemberg avatar May 11 '23 13:05 andrewrosemberg

Here you go: https://github.com/odow/SDDP.jl/pull/611

odow avatar May 13 '23 07:05 odow

It only took us four years, but we got there...

odow avatar May 14 '23 03:05 odow