DACBench
Wrapping existing optimisers
The SGD benchmark currently only supports controlling the learning rate of a fixed set of optimisers (Adam, Momentum, RMSprop). Rather than using the existing PyTorch implementations (torch.optim), these optimisers are re-implemented in the benchmark.
This enhancement would wrap the existing PyTorch optimizers and support dynamically reconfiguring their learning rate in a generic way.
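A minimal sketch of what "generic" could mean here: any torch.optim optimizer exposes its learning rate through param_groups, so the environment could update it between steps without knowing which optimizer it is. The model, data, and learning-rate schedule below are placeholders, not part of DACBench.

```python
import torch

model = torch.nn.Linear(10, 1)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)


def set_learning_rate(optimizer: torch.optim.Optimizer, lr: float) -> None:
    """Set the learning rate on every parameter group of the optimizer."""
    for group in optimizer.param_groups:
        group["lr"] = lr


# Example: the DAC agent chooses a new learning rate at each step.
x, y = torch.randn(4, 10), torch.randn(4, 1)
for new_lr in [1e-3, 5e-4, 1e-4]:
    set_learning_rate(optimizer, new_lr)
    optimizer.zero_grad()
    loss = torch.nn.functional.mse_loss(model(x), y)
    loss.backward()
    optimizer.step()
```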
Benchmark configuration should support (see the sketch after this list):
- Passing a PyTorch optimizer class name (e.g. torch.optim.Adam)
- Passing the initial optimizer arguments (including the learning rate)
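A sketch of what such configuration entries might look like; the key names ("optimizer_class", "optimizer_kwargs") and the helper below are hypothetical, not an existing DACBench API.

```python
import torch

benchmark_config = {
    # Any torch.optim optimizer class can be passed here.
    "optimizer_class": torch.optim.RMSprop,
    # Initial constructor arguments, including the starting learning rate.
    "optimizer_kwargs": {"lr": 1e-2, "alpha": 0.99},
}


def make_optimizer(params, config):
    """Build the wrapped optimizer generically from the configuration."""
    return config["optimizer_class"](params, **config["optimizer_kwargs"])
```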
Various changes to the environment are required to separate out the optimizer-specific aspects and to extract the information needed to compute the state features (see comments in the code).
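For illustration, a hedged sketch of how such information could be pulled out of a wrapped torch.optim optimizer without re-implementing it; the specific features collected here (per-group learning rates and gradient norms) are assumptions, not the benchmark's actual state features.

```python
import torch


def optimizer_state_info(optimizer: torch.optim.Optimizer) -> dict:
    """Collect current learning rates and gradient norms per parameter group."""
    info = {"learning_rates": [], "grad_norms": []}
    for group in optimizer.param_groups:
        info["learning_rates"].append(group["lr"])
        grads = [p.grad for p in group["params"] if p.grad is not None]
        if grads:
            total_norm = torch.norm(torch.stack([g.norm() for g in grads]))
            info["grad_norms"].append(total_norm.item())
    return info
```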