some bug patches + torch no_grad forward pass
Continuing the work that @PTNobel and I started in the diffcp repository to address https://github.com/cvxpy/cvxpy/issues/2485.
Specifically, this PR uses the new diffcp `solve_only_batch` call chain in the torch `CvxpyLayer` when gradients are not needed for reverse-mode autodiff. This functionality can be accessed by:
- Setting all parameter tensors that will be passed into the `CvxpyLayer` to not require gradients.
- Making the `layer(param1, param2, ...)` call inside a `with torch.no_grad():` block.
For examples, see the last two tests in `torch/test_cvxpylayer.py`.
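As a quick illustration, here is a minimal sketch of the two conditions together; the problem and tensor shapes below are made up for illustration:

```python
import cvxpy as cp
import torch
from cvxpylayers.torch import CvxpyLayer

# Made-up problem: non-negative least-absolute-deviations fit.
n, m = 2, 3
x = cp.Variable(n)
A = cp.Parameter((m, n))
b = cp.Parameter(m)
problem = cp.Problem(cp.Minimize(cp.pnorm(A @ x - b, p=1)), [x >= 0])
layer = CvxpyLayer(problem, parameters=[A, b], variables=[x])

# Condition 1: the parameter tensors do not require gradients
# (requires_grad defaults to False).
A_tch = torch.randn(m, n)
b_tch = torch.randn(m)

# Condition 2: the layer is called inside torch.no_grad(), so the
# forward pass can skip computing the derivative data it would
# otherwise need for backprop.
with torch.no_grad():
    solution, = layer(A_tch, b_tch)
```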
Additionally, I patched two errors in the test file. Note that the tests

- `test_basic_gp`
- `test_lml`
- `test_simple_batch_socp`

failed prior to the `CvxpyLayer` additions that I made. The GP failure appears to be due to a difference in the solutions obtained by `cvxpylayers` and pure `cvxpy`; the other two failures are due to small(ish) Jacobian mismatches.
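For context, a Jacobian mismatch of this kind is what a finite-difference gradient check reports. A hypothetical sketch of such a check (the problem below is made up, not the one from the failing tests):

```python
import cvxpy as cp
import torch
from cvxpylayers.torch import CvxpyLayer

# gradcheck compares the analytical Jacobian from the layer's backward
# pass against finite-difference estimates, within atol/rtol tolerances.
n, m = 2, 3
x = cp.Variable(n)
A = cp.Parameter((m, n))
b = cp.Parameter(m)
problem = cp.Problem(cp.Minimize(cp.sum_squares(A @ x - b)), [x >= 0])
layer = CvxpyLayer(problem, parameters=[A, b], variables=[x])

# Double precision keeps the finite-difference noise small.
A_tch = torch.randn(m, n, dtype=torch.double, requires_grad=True)
b_tch = torch.randn(m, dtype=torch.double, requires_grad=True)

# A "small(ish) Jacobian mismatch" shows up as this check failing within
# the given tolerance even though the solve itself looks reasonable.
torch.autograd.gradcheck(
    lambda A, b: layer(A, b)[0].sum(), (A_tch, b_tch), atol=1e-4
)
```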
The next steps (I think) to complete issue https://github.com/cvxpy/cvxpy/issues/2485 are:

- Release the new version of `diffcp` with the `solve_only` functionality. (I used my local copy to make these `cvxpylayers` changes.)
- See if this update provides any meaningful computational enhancements; a possible timing harness is sketched after this list.
- Implement this no_grad functionality for the JAX and TensorFlow layers.
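A hypothetical timing harness for that second item, assuming the solve-only dispatch described above; the problem sizes and batch size are made up:

```python
import time

import cvxpy as cp
import torch
from cvxpylayers.torch import CvxpyLayer

# Made-up batched problem for a rough forward-pass comparison.
n, m, batch = 20, 50, 32
x = cp.Variable(n)
A = cp.Parameter((m, n))
b = cp.Parameter(m)
problem = cp.Problem(cp.Minimize(cp.sum_squares(A @ x - b)), [x >= 0])
layer = CvxpyLayer(problem, parameters=[A, b], variables=[x])

A_tch = torch.randn(batch, m, n)
b_tch = torch.randn(batch, m)

# Baseline: gradient-tracking forward pass (derivative data computed).
start = time.perf_counter()
layer(A_tch.requires_grad_(True), b_tch.requires_grad_(True))
print(f"with derivative data: {time.perf_counter() - start:.3f}s")

# Candidate: detached tensors under no_grad, taking the solve-only path.
A_tch, b_tch = A_tch.detach(), b_tch.detach()
start = time.perf_counter()
with torch.no_grad():
    layer(A_tch, b_tch)
print(f"solve-only:           {time.perf_counter() - start:.3f}s")
```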
Please let me know if there are any other suggestions and/or edits I should make to this PR. Thanks!