
handling extinction for SSAs that use a running and updating sum_rate

Open isaacsas opened this issue 4 years ago • 3 comments

RSSA, RSSACR, DirectCR, and SortingDirect all encounter the problem of all rates actually being zero while the current sum of the rates remains slightly above zero due to floating point errors in the running sum. This led to crashes for the first three when trying to sample the next reaction in a spatial A+B -> 0 system (SortingDirect ends up just sampling the last reaction, but this shouldn't matter since no further reaction can occur).
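A minimal sketch of the failure mode (hypothetical rates, not JumpProcesses internals): if an aggregator maintains `sum_rate` incrementally rather than recomputing it, the running sum can end up tiny but nonzero even after every individual rate has been set exactly to zero.

```julia
# Hypothetical rate vector; the aggregator tracks sum_rate incrementally.
rates = rand(1000) .* 0.1
sum_rate = sum(rates)

# Zero out each rate one at a time, updating the running sum incrementally,
# as a DirectCR/RSSA-style aggregator would after each reaction fires.
for i in eachindex(rates)
    global sum_rate -= rates[i]
    rates[i] = 0.0
end

# All rates are now exactly zero, but the running sum typically is not,
# because the incremental subtractions round differently than the original sum.
println(sum(rates))   # exactly 0.0
println(sum_rate)     # usually a tiny nonzero residue
```

Sampling "which reaction fires next" from this tiny positive `sum_rate` is what crashes, since every individual rate is zero.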

I'll put in a PR with a fix, but this is worth investigating further to make sure the fix doesn't hurt performance and isn't missing any other cases.

isaacsas avatar Aug 19 '20 01:08 isaacsas

Instead of checking whether sum_rate is zero, we can check whether the current state vector is the zero vector. This is, of course, linear in the number of species, but the number of species is usually considerably smaller than the number of reactions. This can be improved by adding a field num_molecules to the aggregator that tracks the total number of molecules remaining, such that the check can be done in constant time. A problem with this approach is that some rate functions can be zero while molecules are still present.
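A sketch of the constant-time counter idea, assuming a hypothetical `num_molecules` field and update hook (these names are illustrative, not JumpProcesses API):

```julia
# Constant-time extinction check via a running molecule counter.
mutable struct ExtinctionTracker
    num_molecules::Int
end

# Update the counter when a reaction fires, given the reaction's net
# stoichiometric change per species (negative entries consume molecules).
function execute_jump!(t::ExtinctionTracker, net_change::Vector{Int})
    t.num_molecules += sum(net_change)
    return t
end

is_extinct(t::ExtinctionTracker) = t.num_molecules == 0

# Example: A + B -> 0 consumes two molecules per firing.
tracker = ExtinctionTracker(2)    # one A and one B left
execute_jump!(tracker, [-1, -1])  # fire A + B -> 0
println(is_extinct(tracker))      # true
```

Note the caveat from above still applies: `is_extinct` returning false does not imply any reaction can fire, e.g. a bimolecular A + B -> 0 system with only one A left has molecules but all rates zero.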

Another idea would be to compare sum_rate against some not-so-small value, e.g. 10^-6, instead of eps(sum_rate), and if sum_rate < 10^-6, actually recalculate sum_rate from scratch. The hope is that floating point error will never accumulate to the point of exceeding 10^-6 (or whatever value is chosen), so if sum_rate > 10^-6, it is definitely nonzero. On the other hand, the event that sum_rate < 10^-6 is rare enough that recalculating sum_rate will happen very infrequently.
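The threshold-and-recompute idea could be sketched like this (the cutoff value and the function name `effective_sum_rate` are illustrative, not part of the package):

```julia
# Treat the running sum as trustworthy only above a not-so-small cutoff;
# below it, fall back to an exact recomputation over all rates.
const RATE_CUTOFF = 1e-6

function effective_sum_rate(running_sum::Float64, rates::Vector{Float64})
    running_sum > RATE_CUTOFF && return running_sum  # definitely nonzero
    return sum(rates)  # rare slow path: exact O(num reactions) recompute
end

rates = zeros(5)
drifted = 3.0e-15  # leftover floating point residue in the running sum
println(effective_sum_rate(drifted, rates))  # 0.0, so extinction is detected
```

The fast path is the common case, so the amortized cost of the occasional full recomputation should be negligible.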

Vilin97 avatar Aug 19 '20 16:08 Vilin97

A first fix is in https://github.com/SciML/DiffEqJump.jl/pull/149, see also comments there.

isaacsas avatar Aug 19 '20 16:08 isaacsas

From Slack: we could try to make nomorejumps! more robust, though, perhaps by calculating the mean time to the next reaction (which we can do as 1/sum_rate) and ending if it is way beyond the end time (i.e. the probability of a jump with that mean occurring by the end time is below some tolerance).
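A sketch of that probabilistic stopping test: with total rate λ = sum_rate, the time to the next jump is Exponential(λ), so the probability of a jump before tend is 1 - exp(-λ(tend - t)). The function name and tolerance below are illustrative, not the actual nomorejumps! implementation.

```julia
# Stop as if extinct when the chance of any jump before tend is negligible.
function probably_no_more_jumps(sum_rate, t, tend; ptol = 1e-12)
    sum_rate <= 0 && return true
    # P(next jump before tend) = 1 - exp(-λ Δt); expm1 keeps this accurate
    # when λ Δt is tiny, exactly the regime this check targets.
    p_jump = -expm1(-sum_rate * (tend - t))
    return p_jump < ptol
end

println(probably_no_more_jumps(1e-20, 0.0, 100.0))  # true: mean wait 1e20 ≫ tend
println(probably_no_more_jumps(1.0, 0.0, 100.0))    # false: a jump is near certain
```

This would catch the floating-point-residue case automatically, since a leftover sum_rate of ~1e-15 implies a mean waiting time of ~1e15, astronomically beyond any practical end time.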

isaacsas avatar Aug 19 '20 17:08 isaacsas