Oscar Dowson

Results 1419 comments of Oscar Dowson

Replace `optimizer = () -> Gurobi.Optimizer(env)` with `optimizer = Ipopt.Optimizer`. But it might not solve properly. Ipopt struggles with these sorts of cutting plane problems. You really need a good...
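For context, that optimizer swap is a one-keyword change in the model constructor. A minimal sketch, assuming a hypothetical `subproblem_builder` and illustrative bounds (the real ones are in the linked issue):

```julia
using SDDP, Ipopt

# Hypothetical stage builder; the actual model is in the linked issue.
function subproblem_builder(sp, t)
    # ... stage variables, constraints, and SDDP.@stageobjective go here
end

model = SDDP.LinearPolicyGraph(
    subproblem_builder;
    stages = 12,              # illustrative
    sense = :Min,
    lower_bound = 0.0,        # illustrative
    optimizer = Ipopt.Optimizer,  # was: () -> Gurobi.Optimizer(env)
)
```

Note that Ipopt needs no shared environment, so the anonymous-function wrapper used for Gurobi is unnecessary.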

I don't think we need to solve problems, though; we need to answer the conceptual question: > does a particular c(t) get to see y(t') for all t' > t?...

> you choose ct after observing yt, before y_t+1 Okay, we're on the same page then. But that's not what your discrete-time Ipopt code above does?

> My discrete time Ipopt code is non-stochastic Yeah haha I'm getting confused with the code across the two issues. I meant the code that produced the figures with the...

SDDP.jl can solve the discrete-time deterministic and stochastic cases, for both the finite horizon and the infinite horizon cases. For the infinite case, you just need to replace the `LinearPolicyGraph`...
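A minimal sketch of that finite-to-infinite swap, assuming `SDDP.UnicyclicGraph` is the intended replacement and using an illustrative discount factor of 0.95 and a hypothetical `subproblem_builder`:

```julia
using SDDP, Ipopt

# Finite horizon: a linear policy graph with a fixed number of stages.
finite = SDDP.LinearPolicyGraph(
    subproblem_builder;       # hypothetical stage builder
    stages = 12,              # illustrative
    sense = :Min,
    lower_bound = 0.0,
    optimizer = Ipopt.Optimizer,
)

# Infinite horizon: a single node that loops back on itself with
# probability 0.95, which acts as the discount factor.
infinite = SDDP.PolicyGraph(
    subproblem_builder,
    SDDP.UnicyclicGraph(0.95);
    sense = :Min,
    lower_bound = 0.0,
    optimizer = Ipopt.Optimizer,
)
```

The same `subproblem_builder` serves both constructions; only the graph describing the stage transitions changes.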

> As it currently stands the docs in packages like SDDP & POMDPs appear far too foreign for us to invest time in trying to use them. I'd love to...

Here's a model that @jd-lara had been running. We can also find risk-averse policies, so swap out that E operator for your choice of convex risk measure. https://odow.github.io/SDDP.jl/stable/guides/add_a_risk_measure/ https://odow.github.io/SDDP.jl/stable/tutorial/theory/22_risk/ We...
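For reference, swapping the expectation for a risk measure is a keyword change at training time. A sketch assuming an already-built `model` and AV@R as the example measure (the parameter values are illustrative):

```julia
using SDDP

# Risk-neutral training uses SDDP.Expectation() by default; passing a
# convex risk measure replaces the E operator in the backward pass.
SDDP.train(
    model;                          # a previously constructed policy graph
    risk_measure = SDDP.AVaR(0.1),  # average value-at-risk, β = 0.1
    iteration_limit = 100,          # illustrative
)
```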

> Is it possible to show me how to solve the example you did above https://github.com/odow/SDDP.jl/issues/429#issuecomment-900610201 > Replace `optimizer = () -> Gurobi.Optimizer(env)` with `optimizer = Ipopt.Optimizer`. But it might...

Yeah it's probably very slow and might crash/warn about numerical instability?

This notation looks better. Nice to see that `c_t` isn't linear, and is a function of the history of the random walk. I'll see if I have time later to...