[BUG] StableLinearSR3 optimizer produces positive eigenvalues
@akaptano, I tried fitting a linear model with the `StableLinearSR3` optimizer, but got some positive eigenvalues. Even fitting the model on a much smaller subset of the data shows a positive eigenvalue. This shouldn't happen, correct?
Reproducing code example:

```python
import pysindy as ps
import numpy as np
arr = np.array([[
0.00241998, 0.0024565 , 0.0024934 , 0.00253064, 0.00256797,
0.00260541, 0.00264304, 0.00268111, 0.00271956, 0.00275811,
0.00279674, 0.00283547, 0.00287453, 0.00291383, 0.00295313,
0.00299244, 0.00303187, 0.00307165, 0.00311172, 0.00315183,
0.00319202, 0.00323241, 0.00327329, 0.00331463, 0.00335619,
0.00339798, 0.00344003, 0.00348257, 0.00352552, 0.00356858,
0.00361175, 0.00365513, 0.003699 , 0.00374399, 0.00378795,
0.00383279, 0.00387797, 0.00392373, 0.00397 , 0.00401651,
0.00406326, 0.00411034, 0.00415798, 0.00420617, 0.00425467,
0.00430352, 0.00435277, 0.00440262, 0.00445306, 0.00450388,
0.00455515, 0.0046069 , 0.00465932, 0.00471239, 0.00476588,
0.00481978, 0.00487411, 0.00492905, 0.0049846 , 0.00504053,
0.00509684, 0.00515355, 0.00521084, 0.00526867, 0.00532686,
0.00538541, 0.00544435, 0.00550388, 0.00556401, 0.00562462,
0.00568577, 0.00574756, 0.00581022, 0.00587367, 0.00593773,
0.00600229, 0.00606738, 0.00613315, 0.00619957, 0.00626635,
0.0063335 , 0.00640108, 0.00646921, 0.00653879, 0.00660665,
0.00667571, 0.006745 , 0.00681456, 0.00688445, 0.00695452,
0.00702482, 0.00709546, 0.00716651, 0.00723802, 0.00730987,
0.00738216, 0.00745493, 0.00752827, 0.0076021 , 0.00767616,
0.00775044, 0.00782501, 0.00790009, 0.00797571, 0.00805171,
0.00812811, 0.00820495, 0.00828238, 0.00836031, 0.00843843,
0.00851665, 0.00859499, 0.00867354, 0.00875231, 0.00883115,
0.00890997, 0.00898878, 0.00906774, 0.00914691, 0.00922605,
0.00930505, 0.00938389, 0.00946273, 0.00954161, 0.00962041,
0.00969925, 0.00977821, 0.00985742, 0.00993689, 0.01001627,
0.01009537, 0.0101742 , 0.01025287, 0.01033162, 0.01041033,
0.01048902, 0.01056778, 0.0106469 , 0.01072649, 0.01080632,
0.01088629, 0.01096636, 0.01104648, 0.01112654, 0.01120623,
0.01128551, 0.0113644 , 0.01144304, 0.01152158, 0.01159999,
0.0116784 , 0.01175693, 0.01183568, 0.01191456, 0.01199337,
0.01207224, 0.01215135, 0.01223087, 0.01231083, 0.01239098,
0.01247121, 0.01255145, 0.01263181, 0.01271236, 0.01279296,
0.01287365, 0.01295457, 0.01303587, 0.0131176 , 0.01319954,
0.01328167, 0.01336408, 0.01344692, 0.01353022, 0.01361374,
0.01369738, 0.01378113, 0.01386498, 0.01394896, 0.01403293,
0.01411686, 0.01420075, 0.01428469, 0.01436995, 0.01445278,
0.01453668, 0.01462042, 0.01470409, 0.01478778, 0.01487135,
0.01495473, 0.01503786, 0.01512088, 0.01520409, 0.01528736,
0.01537053, 0.0154535 , 0.01553644, 0.01561946, 0.01570222
]]).T
n_modes = arr.shape[1]  # number of state variables (1 here)
model = ps.SINDy(
    differentiation_method=ps.SmoothedFiniteDifference(),
    feature_library=ps.PolynomialLibrary(degree=1, include_bias=False),
    feature_names=[f"v{mode}" for mode in range(n_modes)],
    optimizer=ps.StableLinearSR3(),
)
model.fit(arr, t=5e-4)
model.print()
```
Result:

```
(v0)' = 0.372 v0
```
Since the coefficient matrix is 1×1, its only eigenvalue is the coefficient itself, which is greater than zero.
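Checking the eigenvalues directly (a quick sanity check; `model.coefficients()` returns the fitted coefficient matrix):

```python
import numpy as np

# The fitted system is (v0)' = A v0 with A a 1x1 matrix,
# so its single eigenvalue is the coefficient itself.
A = model.coefficients()
print(np.linalg.eigvals(A))  # [0.372...] -- positive, i.e. unstable
```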
PySINDy/Python version information:
current master
The implementation is based on a stability-promoting term with a $1/\nu$ in front, so decrease the value of nu to make things more stable (stability is not guaranteed, and if the data sucks, this will be challenging anyway).
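For example, something like this (a sketch; `nu` is the SR3 relaxation parameter, and `1e-5` is an illustrative guess rather than a tuned value):

```python
# Smaller nu weights the (0.5/nu)||w - u||^2 coupling more heavily,
# pulling the fitted coefficients toward the stable auxiliary variable.
model = ps.SINDy(
    differentiation_method=ps.SmoothedFiniteDifference(),
    feature_library=ps.PolynomialLibrary(degree=1, include_bias=False),
    optimizer=ps.StableLinearSR3(nu=1e-5),
)
model.fit(arr, t=5e-4)
model.print()
```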
Ohhh, I think I'm looking for `model.optimizer.coef_full_`, which is -1e-8 in this example?
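Printing both coefficient arrays side by side (values from the run above):

```python
print(model.optimizer.coef_)       # ~0.372, the sparse iterate that model.print() reports
print(model.optimizer.coef_full_)  # ~-1e-8, the stabilized (negative-definite) iterate
```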
I think there may be an issue with the docstring mixing up its variables:

> Attempts to minimize the objective function
>
> $$ 0.5\|y - Xw\|_2^2 + \lambda R(u) + (0.5/\nu)\|w - u\|_2^2 $$
>
> subject to
>
> $$ Cu = d, \quad Du = e, \quad w \text{ negative definite} $$
>
> where $R(u)$ is a regularization function
Later, in the argument description for `nu`, it says "Decreasing nu encourages u and v to be close", and I'm thinking these refer to variable names in a paper rather than in this docstring? Then later it says `self.coef_ = v` (i.e. `self.coef_ = coef_sparse`) and `self.coef_full_ = u` (i.e. `self.coef_full_ = coef_negative_definite`). There is no mention of $w$, and it seems the $u$ of the attributes corresponds to the $w$ of the equation.
In trying to tease apart $u$ vs. $v$ vs. $w$, I saw that `StableLinearSR3._update_A()` creates `coef_negative_definite`. Is that actually the partial minimization over $w$ of the objective above?
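For reference, here is a hedged sketch (my own, not pysindy's `_update_A()`) of the kind of step I would expect that to be: with $u$ fixed, enforce negative definiteness by clipping the eigenvalues of (the symmetric part of) the unconstrained minimizer:

```python
import numpy as np

def clip_to_negative_definite(A, eps=1e-8):
    """Hypothetical helper: push the symmetric part of A into the
    negative-definite cone by clipping its eigenvalues at -eps."""
    eigvals, eigvecs = np.linalg.eigh(0.5 * (A + A.T))
    eigvals = np.minimum(eigvals, -eps)  # force every eigenvalue below zero
    return eigvecs @ np.diag(eigvals) @ eigvecs.T
```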