torchdiffeq
Sherman-Morrison Method for Implicit Solvers
Closes #267 and closes #214.
Replaces the costly Jacobian inverse calculation in the implicit solvers with the Sherman-Morrison formula.
Additionally, this reorders the tableau for GL4 to match its textbook definition more closely and switches the implicit trapezoid method to use the DIRK solver.
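For context, the Sherman-Morrison formula solves a linear system with a rank-1-updated matrix `(A + u vᵀ)` using only a previously computed inverse (or factorization) of `A`, avoiding a fresh O(n³) inversion at each Newton step. A minimal NumPy sketch (not the PR's actual implementation, just the identity it relies on):

```python
import numpy as np

def sherman_morrison_solve(A_inv, u, v, b):
    """Solve (A + u v^T) x = b given A^{-1}, without re-inverting.

    Uses (A + u v^T)^{-1} = A^{-1} - (A^{-1} u v^T A^{-1}) / (1 + v^T A^{-1} u).
    """
    Ainv_b = A_inv @ b
    Ainv_u = A_inv @ u
    denom = 1.0 + v @ Ainv_u  # must be nonzero for the update to be valid
    return Ainv_b - Ainv_u * (v @ Ainv_b) / denom

# Sanity check against a direct dense solve.
rng = np.random.default_rng(0)
n = 5
A = rng.standard_normal((n, n)) + n * np.eye(n)  # well-conditioned test matrix
u, v, b = rng.standard_normal(n), rng.standard_normal(n), rng.standard_normal(n)
x = sherman_morrison_solve(np.linalg.inv(A), u, v, b)
x_direct = np.linalg.solve(A + np.outer(u, v), b)
assert np.allclose(x, x_direct)
```

The payoff is that only matrix-vector products are needed once `A⁻¹` (or an LU factorization of `A`) is available, so each solver iteration costs O(n²) instead of O(n³).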
Tests
```
python tests/run_all.py
........\torchdiffeq\torchdiffeq\torchdiffeq\_impl\rk_common.py:554: UserWarning: Functional iteration did not converge. Solution may be incorrect.
warnings.warn('Functional iteration did not converge. Solution may be incorrect.')
\torchdiffeq\torchdiffeq\torchdiffeq\_impl\rk_common.py:464: UserWarning: Functional iteration did not converge. Solution may be incorrect.
warnings.warn('Functional iteration did not converge. Solution may be incorrect.')
...........\scipy\integrate\_ivp\ivp.py:621: UserWarning: The following arguments have no effect for a chosen solver: `min_step`.
solver = method(fun, t0, y0, tf, vectorized=vectorized, **options)
\scipy\integrate\_ivp\rk.py:505: UserWarning: The following arguments have no effect for a chosen solver: `min_step`.
super().__init__(fun, t0, y0, t_bound, max_step, rtol, atol,
...
----------------------------------------------------------------------
Ran 22 tests in 343.978s
OK
```
@rtqichen this was a bigger update. For my work, the Jacobian inverse would take too much memory if it were dense. Following this paper, I switched it to a sparse representation, and it can actually run faster now. I also cleaned up `__init__` using `super()`.
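To illustrate the sparse-Jacobian point (this snippet is not from the PR; it assumes SciPy is available): for an ODE right-hand side with local coupling, the Jacobian is banded, so storing and factorizing it sparsely needs O(n) memory instead of the O(n²) a dense inverse would require.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 1000
# Tridiagonal Jacobian, typical of a 1-D spatially discretized ODE system.
J = sp.diags(
    [np.full(n - 1, 1.0), np.full(n, -2.0), np.full(n - 1, 1.0)],
    offsets=[-1, 0, 1],
    format="csc",
)
b = np.ones(n)

# Sparse solve touches only the ~3n stored nonzeros...
x_sparse = spla.spsolve(J, b)
# ...while a dense solve must materialize all n^2 entries.
x_dense = np.linalg.solve(J.toarray(), b)
assert np.allclose(x_sparse, x_dense)
```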