
Solve constrained optimization through augmented Lagrangian

[Open] hongkai-dai opened this issue 3 years ago • 3 comments

Currently Drake doesn't implement the augmented Lagrangian method for optimization. We think this method can be very helpful for constrained optimization.

The tentative plan includes supporting the following features:

  • [ ] Compute the augmented Lagrangian for a given mathematical program.
    • [x] Compute the non-smooth augmented Lagrangian. [PR #16391]
    • [x] Compute the smooth augmented Lagrangian.
  • [ ] Solve the constrained optimization program using a black-box optimizer with the augmented Lagrangian.
    • [ ] Support a black-box optimizer in Drake. [issue #16433]
    • [ ] Create a class that calls this black-box optimizer to minimize a sequence of augmented Lagrangian objectives.
  • [ ] Solve the constrained optimization using a gradient-based solver with the augmented Lagrangian.
    • [ ] Solve the smooth augmented Lagrangian with a first-order method (gradient descent, Adam, etc.).
    • [ ] Solve the smooth augmented Lagrangian with a second-order method (Newton, L-BFGS, etc.).

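For reference, the non-smooth augmented Lagrangian in the first bullet refers to the standard max-form (Powell–Hestenes–Rockafellar) penalty for inequality constraints. A minimal NumPy sketch for a program min f(x) s.t. g(x) ≤ 0 (elementwise) — the function names here are illustrative, not Drake's API, and Drake's version additionally handles two-sided constraint bounds:

```python
import numpy as np

def nonsmooth_al(f, g, x, lam, mu):
    """Evaluate the non-smooth augmented Lagrangian of
        min f(x)  s.t.  g(x) <= 0  (elementwise),
    with multipliers lam >= 0 and penalty weight mu > 0:
        f(x) + (1 / (2*mu)) * sum( max(0, lam + mu*g(x))^2 - lam^2 ).
    The max(...) term is non-smooth exactly at the constraint boundary.
    """
    gx = np.asarray(g(x), dtype=float)
    penalty = np.maximum(0.0, lam + mu * gx) ** 2 - lam ** 2
    return f(x) + np.sum(penalty) / (2.0 * mu)

f = lambda x: float(x[0] ** 2)
g = lambda x: np.array([x[0] - 1.0])  # x <= 1

# Strictly feasible point with zero multipliers: AL reduces to f(x).
inside = nonsmooth_al(f, g, np.array([0.5]), np.zeros(1), 1.0)
# Infeasible point: the violation is penalized on top of f(x).
outside = nonsmooth_al(f, g, np.array([2.0]), np.zeros(1), 2.0)
```

At a strictly feasible point with zero multipliers the max term vanishes, so the AL coincides with the original cost; any violation adds a quadratic penalty scaled by mu.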
hongkai-dai avatar Jan 26 '22 20:01 hongkai-dai

I also think that we should provide a method that constructs a new program as the augmented Lagrangian of the original problem (e.g., AddDecisionVariables for the original decision variables, taking the current multiplier values as an argument, and then AddCost for the augmented Lagrangian cost). I guess we would also need to provide the updates for the multipliers. But that way, all of the existing solvers could be applied to the augmented Lagrangian formulation, no?

(Perhaps this is contained in your "gradient-based solver" bullet...?)

RussTedrake avatar Mar 12 '22 14:03 RussTedrake

What I planned is this:

  1. Create a new solver like AugmentedLagrangianSolver, which takes in a solver ID (like SNOPT/IPOPT/NLOPT).
  2. Inside this solver, when Solve() is called with a MathematicalProgram containing constraints, it first constructs a new MathematicalProgram corresponding to the augmented Lagrangian for the current Lagrangian multipliers and penalty term. The solver (with the ID provided in step 1) then solves this unconstrained MathematicalProgram.
  3. It then updates the Lagrangian multipliers and the penalty term, and returns to step 2.
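The outer loop in steps 2–3 can be sketched on a toy equality-constrained problem, min f(x) = x² s.t. h(x) = x − 1 = 0, whose inner AL subproblem is quadratic and hence solvable in closed form. The function names here are illustrative, not Drake's API:

```python
def solve_al_subproblem(lam, mu):
    # Step 2: minimize the unconstrained augmented Lagrangian
    #   L(x) = x^2 + lam * (x - 1) + (mu / 2) * (x - 1)^2.
    # L is quadratic, so setting dL/dx = 2x + lam + mu*(x - 1) = 0
    # gives the minimizer in closed form:
    return (mu - lam) / (2.0 + mu)

def augmented_lagrangian_solve(num_iters=20):
    lam, mu = 0.0, 1.0
    x = solve_al_subproblem(lam, mu)
    for _ in range(num_iters):
        x = solve_al_subproblem(lam, mu)  # step 2: solve the AL subproblem
        h = x - 1.0                       # constraint violation h(x)
        lam += mu * h                     # step 3: first-order multiplier update
        mu *= 2.0                         # step 3: grow the penalty weight
    return x, lam

x, lam = augmented_lagrangian_solve()
# x converges to the constrained optimum 1; lam to the true multiplier -2.
```

Each outer iteration shrinks the constraint violation, and the multiplier update makes the iterates converge without mu having to go to infinity as fast as a pure penalty method would require.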

Does this make sense? One implementation already exists in Anzu: https://github.shared-services.aws.tri.global/robotics/anzu/blob/master/common/nevergrad_al.py.

hongkai-dai avatar Mar 13 '22 06:03 hongkai-dai

Yes, that sounds great. I just wanted to make sure I understood the plan. Thanks.

RussTedrake avatar Mar 14 '22 10:03 RussTedrake