TADAM solver
Trust-region embedded ADAM (TADAM) is an adaptation of ADAM with convergence guarantees in the non-convex case. It relies on limiting the momentum contribution so that each step remains a descent direction.
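To illustrate the "limit the momentum contribution" idea, here is a minimal sketch. This is NOT the actual TADAM subproblem; `tadam_like_step`, the fallback rule, and all parameter values are hypothetical, chosen only to show how one can reject a momentum direction that is not a descent direction (i.e., whose inner product with the gradient is not positive before negation).

```python
def tadam_like_step(g, m, beta=0.9, lr=0.1):
    """Hypothetical sketch: use the momentum direction only if negating it
    yields a descent direction; otherwise fall back to steepest descent.
    Not the real TADAM update, just an illustration of the safeguard."""
    # candidate momentum update (standard exponential moving average)
    m_new = [beta * mi + (1 - beta) * gi for mi, gi in zip(m, g)]
    # d = -m_new is a descent direction iff <m_new, g> > 0
    dot = sum(mi * gi for mi, gi in zip(m_new, g))
    if dot <= 0:
        # momentum disagrees with the gradient: drop it for this step
        d = [-gi for gi in g]
    else:
        d = [-mi for mi in m_new]
    step = [lr * di for di in d]
    return step, m_new

# gradient and a stale momentum pointing the wrong way
g = [1.0, 2.0]
m = [-10.0, -10.0]
step, m_new = tadam_like_step(g, m)
# the returned step is still a descent direction: <step, g> < 0
assert sum(si * gi for si, gi in zip(step, g)) < 0
```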
There are likely allocation issues in solve_tadam_subproblem that will have to be fixed.
Some tests do not pass with Float16; however, the method is not well suited to deterministic problems anyway.
Did you investigate why the tests didn't pass?
tadam is an adaptation of ADAM, and ADAM is designed to train DNNs in a stochastic context; it tends to generate steps close to $s = -\mathrm{sign}(\nabla f(x))$.
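The sign-like behavior is easy to see with a constant gradient: the bias-corrected first moment converges to $\nabla f(x)$ and the second moment to its elementwise square, so each coordinate of the step approaches $-\mathrm{lr}\cdot\mathrm{sign}(\nabla f(x))$ regardless of the gradient's magnitude. A small self-contained demo of the standard ADAM update (the helper `adam_step` and the test gradient are mine, not from the PR):

```python
import math

def adam_step(g, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    """One step of the standard ADAM update with bias correction."""
    m = [b1 * mi + (1 - b1) * gi for mi, gi in zip(m, g)]
    v = [b2 * vi + (1 - b2) * gi * gi for vi, gi in zip(v, g)]
    mhat = [mi / (1 - b1 ** t) for mi in m]
    vhat = [vi / (1 - b2 ** t) for vi in v]
    s = [-lr * mh / (math.sqrt(vh) + eps) for mh, vh in zip(mhat, vhat)]
    return s, m, v

# constant gradient with very different magnitudes per coordinate
g = [3.0, -0.01]
m, v = [0.0, 0.0], [0.0, 0.0]
for t in range(1, 51):
    s, m, v = adam_step(g, m, v, t)
# both coordinates end up with step magnitude ~lr, i.e. s ~ -lr * sign(g)
```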
It looks like tadam is not well adapted to solving deterministic problems and might even fail on simple ones.
I haven't looked into the details of why tadam fails the low-precision (Float16) tests, but it is very probably related to the above.
I understand. If it's just Float16 misbehaving, then it is most likely a tolerance or parameter issue.