Bence Bagi
I removed stochastic optimization from #365. An initial definition of the stochastic optimization interface for solvers is [saved on this branch](https://github.com/bagibence/nemos/tree/stochastic_solver_api).
> Since writing a generator / iterator that supplies batches is on a similar difficulty level for users as writing the optimization loop, functionality to create a generator given the...
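As a sketch of what such a helper could look like (the name and signature are hypothetical, not the actual nemos API), a function that builds an epoch-reshuffling minibatch generator from paired data, so users only supply the data rather than hand-writing the loop:

```python
import random


def batch_generator(X, y, batch_size, *, shuffle=True, seed=0):
    """Yield (X_batch, y_batch) pairs indefinitely, reshuffling each epoch.

    Hypothetical helper illustrating what nemos could provide alongside
    the stochastic solver interface; a real version would work on arrays
    or pytrees rather than Python lists.
    """
    rng = random.Random(seed)
    n = len(X)
    while True:  # loop over epochs forever; the solver decides when to stop
        idx = list(range(n))
        if shuffle:
            rng.shuffle(idx)
        for start in range(0, n, batch_size):
            sel = idx[start:start + batch_size]
            yield [X[i] for i in sel], [y[i] for i in sel]
```

A stochastic solver could then just call `next(gen)` once per update step.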
@BalzaniEdoardo also [suggested disallowing linesearches in stochastic optimization](https://github.com/flatironinstitute/nemos/pull/365/files/3026936e46e148be431f70147191971c1443e001#r2242805876)
Similarly, to support the original Nesterov acceleration, this could be adapted to create a port of `GradientDescent`.
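For reference, the original Nesterov momentum schedule such a port would implement can be sketched in plain Python (a toy version on lists of floats; the actual solver would operate on pytrees behind the solver interface):

```python
def nesterov_gd(grad, x0, step_size, n_steps):
    """Nesterov's accelerated gradient method with the original
    t_{k+1} = (1 + sqrt(1 + 4 t_k^2)) / 2 momentum schedule.

    Minimal illustrative sketch, not the nemos implementation.
    """
    x = list(x0)  # current iterate
    y = list(x0)  # extrapolated (look-ahead) point
    t = 1.0
    for _ in range(n_steps):
        g = grad(y)
        # Gradient step taken from the extrapolated point.
        x_next = [yi - step_size * gi for yi, gi in zip(y, g)]
        # Update the momentum coefficient.
        t_next = (1.0 + (1.0 + 4.0 * t * t) ** 0.5) / 2.0
        beta = (t - 1.0) / t_next
        # Extrapolate using the last two iterates.
        y = [xn + beta * (xn - xo) for xn, xo in zip(x_next, x)]
        x, t = x_next, t_next
    return x


# Minimize f(x) = (x - 3)^2, whose gradient is 2 * (x - 3).
minimizer = nesterov_gd(lambda v: [2.0 * (vi - 3.0) for vi in v], [0.0], 0.1, 1000)
```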
When this is done:
- [x] Remove `OptimistixOptaxProximalGradient`
- [x] Switch `OptimistixOptaxGradientDescent` to use `optax.scale_by_backtracking_linesearch`
- [x] Remove `stateful_scale_by_learning_rate` (and related code), which is only needed for `OptimistixOptaxProximalGradient`
CI will fail for two different reasons. The first, which stopped execution now, is described in [this comment](https://github.com/flatironinstitute/nemos/pull/444#discussion_r2599064008). The other was introduced when merging the current `dev` and...
Adding a protocol that defines the interface and checks for the existence of the required methods could be useful for validating solvers passed by the user. It does not check types, though.
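A minimal sketch of such a protocol (method names are illustrative, not the actual nemos solver interface), showing both the upside and the limitation: a `runtime_checkable` `Protocol` lets `isinstance` verify that the required methods exist, but it never validates their signatures or return types.

```python
from typing import Protocol, runtime_checkable


@runtime_checkable
class SolverProtocol(Protocol):
    """Hypothetical interface a user-supplied solver must satisfy."""

    def init_state(self, init_params, *args): ...
    def update(self, params, state, *args): ...
    def run(self, init_params, *args): ...


class MySolver:
    # A user-supplied solver that structurally matches the protocol.
    def init_state(self, init_params, *args):
        return {}

    def update(self, params, state, *args):
        return params, state

    def run(self, init_params, *args):
        return init_params


# isinstance only verifies that the three methods exist; a solver whose
# `update` took the wrong arguments would still pass this check.
assert isinstance(MySolver(), SolverProtocol)
```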
- [ ] Move solver-related documentation (e.g. about `OptimizationInfo`) from the developer notes to the API docs.
An alternative to disabling the solver-regularizer compatibility check could be to let regularizers register additional allowed solvers. A user could then bring their own implementation of a solver...
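A rough sketch of what such registration could look like (class and method names are hypothetical, not the current nemos API): each regularizer class keeps its own set of allowed solver names, and users opt their custom solver into the check instead of the check being disabled.

```python
class Regularizer:
    """Hypothetical base class with a per-class registry of allowed solvers."""

    _allowed_solvers: set = set()

    @classmethod
    def register_solver(cls, solver_name: str) -> None:
        # A user bringing their own solver registers it here, keeping the
        # compatibility check active rather than turning it off.
        cls._allowed_solvers.add(solver_name)

    @classmethod
    def check_solver(cls, solver_name: str) -> None:
        if solver_name not in cls._allowed_solvers:
            raise ValueError(
                f"Solver {solver_name!r} is not allowed for {cls.__name__}."
            )


class Ridge(Regularizer):
    # Each subclass defines its own set so registration stays per-regularizer.
    _allowed_solvers = {"GradientDescent", "LBFGS"}


Ridge.register_solver("MyCustomSolver")
Ridge.check_solver("MyCustomSolver")  # passes once registered
```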