
Restart dual_optimizer state when performing dual restarts

Open juan43ramirez opened this issue 3 years ago • 3 comments

Enhancement

When a dual restart is triggered, the dual variables are reset to their initial value of 0. However, the state of the primal and dual optimizers remains unchanged. This state may include the running averages used by momentum mechanisms.

These optimizer states could be reset along with the dual variables when feasibility is achieved.

Motivation

This would amount to a full reset of the optimization protocol once the constraint is satisfied. Currently, the reset is "half-baked" in the sense that only the dual variables are reset.

References

Resetting the state of a PyTorch optimizer: https://discuss.pytorch.org/t/reset-adaptive-optimizer-state/14654/5
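
For reference, a minimal sketch of the approach from the linked thread. Only the per-parameter buffers (e.g. Adam's `exp_avg` and `exp_avg_sq`) are dropped; the parameter groups and their hyperparameters survive:

```python
from collections import defaultdict

import torch

param = torch.nn.Parameter(torch.zeros(3))
optimizer = torch.optim.Adam([param], lr=1e-2)

# ... optimizer.step() calls populate the per-parameter state
# (exp_avg, exp_avg_sq, step) ...

# Option 1: wipe the per-parameter buffers in place; the param groups
# (lr, betas, ...) are kept.
optimizer.state = defaultdict(dict)

# Option 2: simply re-instantiate the optimizer from scratch.
optimizer = torch.optim.Adam([param], lr=1e-2)
```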

juan43ramirez avatar Jun 07 '22 23:06 juan43ramirez

This is a good idea.

This is certainly problematic for the dual variables: momentum "accumulated" during periods of feasibility might prevent the multiplier from moving in the right direction if the constraint becomes violated later.

I am not sure this is as "problematic" for the primal optimizer. Maybe we could add a flag to also reset the state of the primal optimizer upon dual restarts, without forcing it.
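
A minimal sketch of such an opt-in flag (the helper name `dual_restart` and the `reset_primal_state` argument are hypothetical, not part of Cooper's API):

```python
from collections import defaultdict

import torch


def dual_restart(multiplier, dual_optimizer, primal_optimizer=None,
                 reset_primal_state=False):
    """Zero the multiplier and the dual optimizer's buffers; optionally
    (opt-in via the flag) also drop the primal optimizer's buffers."""
    with torch.no_grad():
        multiplier.zero_()  # dual variables back to their initial value 0
    dual_optimizer.state = defaultdict(dict)  # forget dual momentum / EMAs
    if reset_primal_state and primal_optimizer is not None:
        primal_optimizer.state = defaultdict(dict)  # full, opt-in reset
```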

gallego-posada avatar Jun 09 '22 02:06 gallego-posada

Perhaps we could maintain the primal optimizer's state.

What worries me is the momentum "towards satisfying the constraints" that primal optimizers may have built up by the time feasibility is reached. Also, the running means have been accumulating possibly large values associated with the $\lambda \nabla g$ term (with $\lambda$ large at the moment of satisfaction). This can bias the direction and aggressively shrink the magnitude of post-restart updates, which should mostly (or only) focus on the objective function.
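
To make the concern concrete, consider a plain heavy-ball momentum buffer (a sketch under simplified assumptions, not Cooper's exact update; $\beta$ is the momentum coefficient, $f$ the objective, $g$ the constraint):

$$m_t = \beta \, m_{t-1} + \nabla f(x_t) + \lambda_t \nabla g(x_t), \qquad x_{t+1} = x_t - \eta \, m_t$$

Even after a restart sets $\lambda$ to 0, the buffer $m_t$ still carries geometrically decaying $\lambda \nabla g$ contributions from earlier steps, so the next few primal updates remain biased towards the constraint.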

That being said, (i) even if the momentum and running means are slightly misleading, they have been computed (and will keep being updated) from objective-heavy gradients, and (ii) I am not sure whether addressing these "issues" would have big practical implications.

juan43ramirez avatar Aug 24 '22 02:08 juan43ramirez

Modifying the state of the dual optimizers based on the feasibility of the constraints is challenging in general. It is manageable for optimizers like SGD with momentum, but could become very difficult for generic optimizers, since the internal state might be "shared" across parameters. For example, an optimizer might keep track of correlations between the gradients of different parameters.
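
To illustrate the contrast: for SGD with momentum the state is element-wise per parameter, so a per-constraint reset is possible (a sketch; `momentum_buffer` is the key `torch.optim.SGD` uses for its internal buffer):

```python
import torch

# One multiplier entry per inequality constraint.
multiplier = torch.nn.Parameter(torch.zeros(5))
dual_optimizer = torch.optim.SGD([multiplier], lr=1e-2, momentum=0.9)

# ... optimizer steps populate dual_optimizer.state[multiplier] ...

feasible = torch.tensor([True, False, True, False, False])  # example mask
state = dual_optimizer.state[multiplier]
if "momentum_buffer" in state:
    # Per-entry surgery works here because SGD's state is element-wise.
    state["momentum_buffer"][feasible] = 0.0
```

No such per-entry surgery is well defined for an optimizer whose state couples parameters, e.g. one maintaining a full-matrix preconditioner.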

The practical implications of this misalignment between the optimizer state and the reset value of the multiplier are unclear to me (and I suspect they depend on the type of optimizer).

For now, I would suggest (1) simply performing the value reset, (2) leaving the optimizer state untouched, and (3) documenting this pitfall explicitly in the Multiplier class.
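
A minimal sketch of what (1)-(3) could look like (the `DenseMultiplier` name and the `restart` method here are illustrative, not necessarily Cooper's actual implementation):

```python
import torch


class DenseMultiplier(torch.nn.Module):
    """Lagrange multipliers for inequality constraints.

    Warning: dual restarts only reset the multiplier *values*. The state of
    any optimizer acting on them (momentum buffers, Adam running averages)
    is left untouched, so post-restart dual updates may still reflect
    pre-restart gradients.
    """

    def __init__(self, init: torch.Tensor):
        super().__init__()
        self.weight = torch.nn.Parameter(init.clone())

    def restart(self, feasible: torch.Tensor) -> None:
        # (1) value reset only; (2) optimizer state deliberately untouched.
        with torch.no_grad():
            self.weight[feasible] = 0.0
```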

gallego-posada avatar Feb 24 '23 20:02 gallego-posada