SDDP.jl
Error when using nonlinear function of state.in
I am trying to solve a problem that depends on a nonlinear function of both the incoming state (which should be a parameter) and the uncertainty.
However, since both the uncertainty and the incoming state are treated as variables in JuMP, I am getting the following error:
ERROR: MathOptInterface.UnsupportedConstraint{MathOptInterface.ScalarNonlinearFunction, MathOptInterface.GreaterThan{Float64}}: `MathOptInterface.ScalarNonlinearFunction`-in-`MathOptInterface.GreaterThan{Float64}` constraint is not supported by the model.
Here is a minimal working example based on one of the SDDP.jl documentation examples, where we demonstrate the error on a trivial constraint. Is there a way around this?
using SDDP, Gurobi, Test

function fast_hydro_thermal()
    model = SDDP.LinearPolicyGraph(;
        stages = 2,
        upper_bound = 0.0,
        sense = :Max,
        optimizer = Gurobi.Optimizer,
    ) do sp, t
        @variable(sp, 0 <= x <= 8, SDDP.State, initial_value = 0.0)
        @variables(sp, begin
            y >= 0
            p >= 0
            ξ
        end)
        @constraints(sp, begin
            p + y >= 6
            x.out <= x.in - y + ξ
        end)
        RAINFALL = (t == 1 ? [6] : [2, 10])
        SDDP.parameterize(sp, RAINFALL) do ω
            return JuMP.fix(ξ, ω)
        end
        @stageobjective(sp, -5 * p)
        @constraint(sp, 1 / x.in * exp(ξ) >= 0)
    end
    SDDP.train(model)
    return
end

fast_hydro_thermal()
For us, this problem appears when implementing a specific stochastic process for modeling the inflow inside the SDDP. Here are the equations for our case.
lower_bound = -(μ_stage) / σ_stage - sum(ϕ[j] * inflow[i, end - j + 1] for j in 1:p)
λ = var_resid / lower_bound^2 + 1
μ = 0.5 * log(var_resid / (λ * (λ - 1)))
σ = sqrt(log(λ))
ϵ[i] = exp(ξ * σ + μ) + lower_bound

@constraint(subproblem, inflow_modeling[i = 1:n_hydro], inflow[i, 6].out == sum(ϕ[i, t][j] * inflow[i, end - j + 1].in for j in 1:p[i, t]) + ϵ[i])
Another workaround would be to treat ϵ[i] as a parameter that is evaluated before solving the subproblem.
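For reference, the same calculation written as a plain Julia function that could be evaluated outside the subproblem (a sketch using the names from the equations above; evaluate_noise and its argument list are ours):

# Evaluate the noise ϵ for hydro i from a standard-normal sample ξ and the
# recent inflow history, following the equations above.
function evaluate_noise(ξ, inflow_history, ϕ, μ_stage, σ_stage, var_resid, p)
    lower_bound = -(μ_stage) / σ_stage - sum(ϕ[j] * inflow_history[end - j + 1] for j in 1:p)
    λ = var_resid / lower_bound^2 + 1
    μ = 0.5 * log(var_resid / (λ * (λ - 1)))
    σ = sqrt(log(λ))
    return exp(ξ * σ + μ) + lower_bound
end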
Just adding more context:
From what I understand, we want SDDP to neglect the influence its states (inflow) have on the uncertainty (ϵ) when calculating the cuts, but we want them to be considered when computing ϵ (a type of time-inconsistency arising from the non-independence of the uncertainty).
ERROR: MathOptInterface.UnsupportedConstraint{MathOptInterface.ScalarNonlinearFunction, MathOptInterface.GreaterThan{Float64}}: `MathOptInterface.ScalarNonlinearFunction`-in-`MathOptInterface.GreaterThan{Float64}` constraint is not supported by the model.
Gurobi does not support nonlinear constraints.
Can you provide a reproducible example of your real problem? 1 / x.in * exp(ξ) >= 0 is a meaningless constraint.
Also note that SDDP requires that the value function is convex with respect to the incoming state variables. Have you verified that this is true for your stochastic process?
Instead of using a variable and fix, you can explicitly modify right-hand side terms using set_normalized_rhs and constraint coefficients using set_normalized_coefficient. A slightly modified version of your example:
using SDDP, Gurobi, Test

function fast_hydro_thermal()
    model = SDDP.LinearPolicyGraph(;
        stages = 2,
        upper_bound = 0.0,
        sense = :Max,
        optimizer = Gurobi.Optimizer,
    ) do sp, t
        @variable(sp, 0 <= x <= 8, SDDP.State, initial_value = 0.0)
        @variables(sp, begin
            y >= 0
            p >= 0
        end)
        @constraints(sp, begin
            p + y >= 6
            # Right-hand side of 0.0 should be ξ
            c_rhsξ, x.out - x.in + y <= 0.0
            # 1.0 should be 1 / exp(ξ)
            c_expξ, 1.0 * x.in >= 0.0
        end)
        @stageobjective(sp, -5 * p)
        RAINFALL = (t == 1 ? [6] : [2, 10])
        SDDP.parameterize(sp, RAINFALL) do ω
            set_normalized_rhs(c_rhsξ, ω)
            set_normalized_coefficient(c_expξ, x.in, 1 / exp(ω))
            return
        end
    end
    SDDP.train(model)
    return
end

fast_hydro_thermal()
From what I understand, we want SDDP to neglect the influence its states
I'm not sure I understand this part. Are you working together?
I was just helping him out. He wants to evaluate another type of time-inconsistency in the hydrothermal economic dispatch.
Also note that SDDP requires that the value function is convex with respect to the incoming state variables. Have you verified that this is true for your stochastic process?
Yeah, this is not the case, because of the non-linear dependency of the current inflow on previous inflows (the inflow follows a non-linear autoregressive process). Nevertheless, we want to neglect this dependency when calculating the cuts.
I will leave it to @arthur-brigatto to provide a MWE for the true problem.
Yes, the stochastic process is non-convex on the state variable and time-dependent. This is not compatible with SDDP requirements, but a similar approach is actually used in Brazilian official models. I wanted to test the approach on SDDP.jl.
I haven't been able to come up with a working example, so I will try to describe what I am trying to achieve. Usually, we would model the stochastic process inside each subproblem as follows:
SDDP.parameterize(sp, Ω) do ω
    return JuMP.fix(ξ, ω)
end
@constraint(sp, inflow.out == ϕ * inflow.in + ξ)
However, in my case ξ is not simply a regular sample from a given distribution. It is actually a function of inflow.in and a random sample from a normal distribution. Hence, I have to construct the noise after sampling from a normal distribution and observing inflow.in. One way would be to evaluate ξ inside each subproblem using decision variables and constraints, similarly to what I described in my previous comment. But this leads to non-convexities, and I am now convinced that it will not work very well. Or I could try something like this:
SDDP.parameterize(sp, Ω) do ω
    return JuMP.fix(ξ, non_convex_function(JuMP.value(inflow.in), ω))
end
@constraint(sp, inflow.out == ϕ * inflow.in + ξ)
Here, non_convex_function() generates ξ using the last evaluated value of the state inflow, preferably as a Float64. In this approach, the non-convexities and dependencies would be 'hidden' from SDDP, which is acceptable for my application. However, I believe that SDDP.jl does not rebuild each subproblem during the convergence procedure, so this approach would also not work. Is this correct? And if so, is there a way around it?
There is no way around this.
You will need to do something like Andrew's paper, where you train a policy with a convex approximation of the inflow dynamics, and then simulate with the non-convex dynamics.
You could start with

SDDP.parameterize(sp, Ω) do ω
    return JuMP.fix(ξ, ω)
end
@constraint(sp, inflow.out == ϕ * inflow.in + ξ)

and assume that the inflows are stagewise independent.
But you could simulate the non-convex dynamics by a-priori constructing a set of simulations and using Historical:
https://sddp.dev/stable/apireference/#SDDP.Historical
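For example, my_sampling_scheme could be built like this (a sketch for the two-stage rainfall example above; the scenario values here are hypothetical):

# Each scenario is a vector of (node_index, noise) pairs, pre-computed by
# simulating the non-convex inflow dynamics offline.
simulations = [
    [(1, 6.0), (2, 2.0)],
    [(1, 6.0), (2, 10.0)],
]
my_sampling_scheme = SDDP.Historical(simulations)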
SDDP.train(model; sampling_scheme = my_sampling_scheme)
Alternatively, if the inflow is univariate, you could approximate the inflows by a Markov chain: https://sddp.dev/stable/guides/create_a_general_policy_graph/#Creating-a-Markovian-graph-automatically
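For reference, a minimal sketch with a hypothetical univariate inflow simulator (budget controls the number of Markov nodes, scenarios the number of trajectories used to fit the chain):

# simulator() must return one realization of the inflow trajectory,
# one value per stage.
simulator() = cumsum(0.5 .+ rand(5))
graph = SDDP.MarkovianGraph(simulator; budget = 8, scenarios = 100)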
But you could simulate the non-convex dynamics by a-priori constructing a set of simulations and using Historical
Is there a way to fetch the value of inflow.in, since it is an input to the calculation of ξ?
No. Assuming that your inflows are just a function of ξ, you'd need to compute them when creating the historical simulator.
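A sketch of that idea under the AR(1) dynamics from the snippets above (non_convex_function, ϕ, the initial inflow, and the counts are placeholders):

# Pre-compute noise trajectories with the non-convex dynamics,
# then hand them to SDDP.Historical as fixed scenarios.
function build_scenarios(n_scenarios, n_stages, inflow0, ϕ)
    scenarios = Vector{Vector{Tuple{Int,Float64}}}()
    for _ in 1:n_scenarios
        inflow, scenario = inflow0, Tuple{Int,Float64}[]
        for t in 1:n_stages
            ξ = non_convex_function(inflow, randn())
            push!(scenario, (t, ξ))
            inflow = ϕ * inflow + ξ   # track the state outside the model
        end
        push!(scenarios, scenario)
    end
    return scenarios
end

my_sampling_scheme = SDDP.Historical(build_scenarios(100, 12, 0.0, 0.9))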
I have been talking to the folks at PSR about this.
We should modify:
https://github.com/odow/SDDP.jl/blob/master/src/plugins/backward_sampling_schemes.jl#L15-L31
to include:
sample_backward_noise_terms_with_state(sampler::MonteCarloSampler, node::Node, state::Dict{Symbol,Float64})
and then we can write a custom backward pass that changes how the sampling happens.
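A minimal sketch of what the new method could look like, with a fallback so existing samplers keep working (this is an assumption about the design, not necessarily the merged implementation):

# New state-aware hook; by default it ignores the state and delegates to
# the existing state-independent sampler.
function sample_backward_noise_terms_with_state(
    sampler::AbstractBackwardSamplingScheme,
    node::Node,
    ::Dict{Symbol,Float64},
)
    return sample_backward_noise_terms(sampler, node)
end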
Great idea! I will have a chat with @arthur-brigatto and submit a PR as soon as possible.
@arthur-brigatto @andrewrosemberg not sure if you've already started working on the implementation, but I've been thinking about a similar problem and would like to chat with you to understand how you're planning to implement this (and possibly contribute).
@andrewrosemberg @grochacunha just added a pull request for this. Have a look.
I implemented it as suggested by @odow, and now it would be possible to create a custom sampling scheme that is conditional on the state in the backward step.
For my specific application, I used:
struct StateConditionalSampler <: AbstractBackwardSamplingScheme
    number_of_backward_samples::Int
end

function sample_backward_noise_terms_with_state(
    sampler::StateConditionalSampler,
    node::Node,
    state::Dict{Symbol,Float64},
)
    prob = 1 / sampler.number_of_backward_samples
    return [Noise(non_linear_function(state), prob) for _ in 1:sampler.number_of_backward_samples]
end
This worked fine in my case.
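For completeness, such a scheme would then be passed to training via the backward_sampling_scheme keyword (the sample count here is arbitrary):

SDDP.train(model; backward_sampling_scheme = StateConditionalSampler(10))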