Joshua Pulsipher

49 comments

For my research, this is a win for using PyTorch surrogates via MathOptAI and other external functions in optimal control problems.

@odow I see you have implemented this for Ipopt.jl and HiGHS.jl. Is Gurobi.jl still on the todo list?

For dynamic optimization, we would commonly have one network used over many time steps. For training, we would pose it as a parameter estimation problem with multiple dynamic trajectory...
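As a toy sketch of that parameter-estimation setup (pure Python, with a hypothetical scalar dynamic model x[t+1] = a * x[t] standing in for the network, shared across all time steps and trajectories, fit by least squares):

```python
def fit_shared_parameter(trajectories):
    """Estimate the single parameter a in x[t+1] = a * x[t], shared
    across all time steps and trajectories, by minimizing the sum of
    squared residuals (closed form: sum(x_t * x_{t+1}) / sum(x_t**2))."""
    num = den = 0.0
    for traj in trajectories:
        for x_t, x_next in zip(traj, traj[1:]):
            num += x_t * x_next
            den += x_t * x_t
    return num / den

# two trajectories generated by a = 0.5
trajs = [[8.0, 4.0, 2.0, 1.0], [6.0, 3.0, 1.5]]
print(fit_shared_parameter(trajs))  # 0.5
```

In a real formulation the scalar `a` would be replaced by the network weights and the recursion by the network itself, but the structure (one set of parameters coupled across every step of every trajectory) is the same.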

For reference, Gekko provides an API for training NNs via 2nd order solvers (e.g., Ipopt) in the Gekko AML: https://gekko.readthedocs.io/en/latest/brain.html

Sequential networks with convolutional layers like GCNConv and [Conv2d](https://docs.pytorch.org/docs/stable/generated/torch.nn.Conv2d.html) would cover most of the use cases I have encountered.

We do frequently use different settings for kernel size, in/out channels, padding, and stride. I haven't personally needed to mess with the dilation or groups. Here is a simple example...
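For reference, the output spatial size induced by those settings follows the standard convolution arithmetic; a small stdlib-only helper (hypothetical, for sanity-checking shapes) matching the formula in the Conv2d docs:

```python
def conv2d_out_size(in_size, kernel, stride=1, padding=0, dilation=1):
    """Output spatial size along one dimension of a 2D convolution,
    per the standard formula:
    floor((in + 2*padding - dilation*(kernel - 1) - 1) / stride) + 1."""
    return (in_size + 2 * padding - dilation * (kernel - 1) - 1) // stride + 1

# e.g., a 32x32 input, 3x3 kernel, stride 2, padding 1 -> 16x16 output
print(conv2d_out_size(32, kernel=3, stride=2, padding=1))  # 16
```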

I'm afraid the 2D convolutional layers fundamentally depend on matrix inputs; vectorizing leads to a loss of information (i.e., correlations/patterns with neighboring elements).

Please get the docs to pass so I can review.

Also, is there a way to avoid using DifferentialEquations in this tutorial? Including it increases the documentation build time to 15 minutes (up from 7 minutes). Since we only do...