
Wrapping existing optimisers

Open Steven-Adriaensen opened this issue 4 years ago • 0 comments

The SGD benchmark currently (only) supports controlling the learning rate of various optimizers (Adam, Momentum, RMSprop). Rather than using the existing PyTorch implementations (torch.optim), these optimizers are re-implemented in the benchmark.

This enhancement would wrap existing PyTorch optimizers and support dynamically reconfiguring their learning rate in a generic way.

Benchmark configuration should support (see the sketch after this list):

  • Passing a PyTorch optimizer class name
  • Passing the initial optimizer arguments (including lr)
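A minimal sketch of what this could look like, assuming a plain dict-style config; the keys `optimizer_class` and `optimizer_kwargs` are illustrative placeholders, not DACBench's actual configuration interface. Only standard torch.optim calls are used, so the learning-rate update works for any optimizer class:

```python
import torch

# Hypothetical config entries; names are assumptions, not the benchmark's API.
config = {
    "optimizer_class": torch.optim.Adam,          # any torch.optim optimizer class
    "optimizer_kwargs": {"lr": 1e-3, "betas": (0.9, 0.999)},
}

model = torch.nn.Linear(10, 2)
optimizer = config["optimizer_class"](model.parameters(), **config["optimizer_kwargs"])

def set_learning_rate(optimizer: torch.optim.Optimizer, lr: float) -> None:
    """Generic learning-rate update that works for any torch.optim optimizer."""
    for param_group in optimizer.param_groups:
        param_group["lr"] = lr

# In the environment's step(), the learning rate chosen by the DAC policy
# would be applied before the next optimizer.step(), e.g.:
set_learning_rate(optimizer, 5e-4)
```

Going through `param_groups` keeps the wrapper independent of the concrete optimizer implementation, which is the main point of moving away from the re-implemented optimizers.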

Various changes to the environment are required to separate out the optimizer-specific aspects and to extract the information needed to compute the state features (see comments in the code).
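As a rough sketch of the second point, state information could be gathered directly from the wrapped optimizer instead of from a custom re-implementation; the feature names below are assumptions for illustration, not the benchmark's actual state definition:

```python
import torch

def gradient_statistics(optimizer: torch.optim.Optimizer) -> dict:
    """Collect simple, optimizer-agnostic statistics over all parameter groups."""
    grad_norms = []
    for group in optimizer.param_groups:
        for p in group["params"]:
            if p.grad is not None:
                grad_norms.append(p.grad.detach().norm().item())
    return {
        # Hypothetical feature names; the real state features live in the env code.
        "mean_grad_norm": sum(grad_norms) / len(grad_norms) if grad_norms else 0.0,
        "current_lr": optimizer.param_groups[0]["lr"],
    }
```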

Steven-Adriaensen · Jul 08 '21, 09:07