
Support very large reaction systems (10^9 reactions)

Open • abhinavsns opened this issue on Nov 29, 2020 • 12 comments

I am working with a large number of reactions involving species X_1 to X_100, among many others, and I am unable to look up the index of, say, X_50 from speciesmap().

This is because the keys have type Term{Real}, and there is no documentation on how to convert a string to this type in order to look up the position of the required species.

abhinavsns commented on Nov 29, 2020

As you noticed, species are now represented as ModelingToolkit.Term{Real}, since we store their functional representation, i.e. S(t) rather than S. You can see how to construct such objects in the ModelingToolkit tutorials, e.g.: https://mtk.sciml.ai/dev/tutorials/symbolic_functions/#Non-Interactive-Development-(No-Macro-Version)-1

For example, you can build X_1(t) like:

t = Num(Variable{ModelingToolkit.Parameter{Real}}(:t))  # t
x = Num(Variable{ModelingToolkit.FnType{Tuple{Any},Real}}(:x,1))(t)

or like

t = Num(Variable{ModelingToolkit.Parameter{Real}}(:t))  # t
x = Num(Variable{ModelingToolkit.FnType{Tuple{Any},Real}}(Symbol("x"),1))(t)
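Once x is constructed this way, it can be used to look up the index (a rough sketch, not tested across versions; rn here is a placeholder for your ReactionSystem, and depending on the version one of the following two forms should work):

idx = speciesmap(rn)[x]                          # if the keys compare equal to the Num wrapper
idx = speciesmap(rn)[ModelingToolkit.value(x)]   # if the keys are the unwrapped Term{Real} objects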

isaacsas commented on Nov 29, 2020

And of course, you can use the ModelingToolkit macros if you aren't trying to do this programmatically:

@parameters t
@variables u[1:100](t)

isaacsas commented on Nov 29, 2020

Yes, it should definitely get a shorthand. I mentioned this to @shashi and @yingboma a bit ago.

ChrisRackauckas commented on Nov 29, 2020

Thank you for the prompt response. I am still new to Julia and am in the process of porting my code. I realized that I can copy the dictionary from speciesmap() and convert the keys to strings:

Dict(string(key) => value for (key, value) in speciesmap(reactions))

This makes it possible to look up the index later.
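To spell that out (a small sketch of the same workaround; reactions is the network object from above, and note that species usually stringify together with their time argument, e.g. "X_50(t)" rather than "X_50"):

name_to_idx = Dict(string(key) => value for (key, value) in speciesmap(reactions))
idx = name_to_idx["X_50(t)"]   # the string keys typically include the "(t)" part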

The Catalyst project overall looks very cool. I plan to simulate reaction systems on the order of 10^9 reactions. Are there any benchmarks I can look at to see what performance I should expect, and whether parallelization can be used via GPU-compiled functions and so on?

abhinavsns commented on Nov 29, 2020

That’s 10^9 reactions?

Multithreading should work, but I'm not sure about GPU support currently. @ChrisRackauckas, can you comment on that?

isaacsas commented on Nov 29, 2020

Multithreading will work. For GPU, I wouldn't think your reaction set is SPMD, is it?

ChrisRackauckas commented on Nov 29, 2020

@isaacsas Yes, up to 10^9 reactions. Currently I have tried up to 40k, and it takes some time to process the @reaction_network, which I load via include("file.jl"), where file.jl is generated by a Python script (please let me know if there is a better and faster way to do this). ODEProblem() also takes a long time, and eventually so does solve.

@ChrisRackauckas I am not sure whether the problem is SPMD or not (it is just a mass-action system). What I was wondering is whether it is possible to accelerate the computation of the right-hand side in the ODE integrator using multithreading or CUDA.

It would be great if you could take a look at this code and help me identify the bottleneck, or a better way of using Catalyst: https://github.com/abhinavsns/MLCRN/blob/main/CRN-Simulator.ipynb. I have uploaded two networks, one with approximately 4,000 reactions ("100.jl") and one with ~40k reactions ("1000.jl"). Currently, performance is similar to my Python implementation. For some reason, macOS only used a single core while running this.

abhinavsns commented on Nov 30, 2020

Oh wow, that's pretty big. ModelingToolkit recently received some updates to improve performance, and we've just started working on optimizing for larger systems, though we are currently targeting systems much smaller than that (I'd say 1e5-1e6 reactions right now). By optimizing, however, I mean optimizing the steps that generate the Julia function encoding the ODEs, not the ODE solvers themselves. (The way Catalyst works is to generate a symbolic model of the reaction system in ModelingToolkit, which ModelingToolkit then converts into a symbolic ODE model, from which a Julia function that evaluates the ODE derivatives is generated. That function is then used by the OrdinaryDiffEq ODE solvers when you call solve.)
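To make that pipeline concrete, here's a minimal sketch with a toy two-species network standing in for a generated file (illustrative only, written against the Catalyst syntax of this era with parameters listed after the block; rn, osys, oprob, u0, tspan, and p are just placeholder names):

using Catalyst, OrdinaryDiffEq

# toy mass-action network
rn = @reaction_network begin
    k1, X1 --> X2
    k2, X2 --> X1
end k1 k2

osys  = convert(ODESystem, rn)            # symbolic ODE model built by ModelingToolkit
u0    = [1.0, 0.0]                        # initial values, ordered as in species(rn)
tspan = (0.0, 10.0)
p     = [1.0, 2.0]                        # parameter values, ordered as in parameters(rn)
oprob = ODEProblem(osys, u0, tspan, p)    # generates and compiles the Julia derivative function
sol   = solve(oprob, Tsit5())             # the actual ODE solve in OrdinaryDiffEq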

So, I think we will hopefully make progress on speeding up the generation of such systems/functions, but I can't say we'll have anything available in the short term, especially for systems that large.

But there are separate issues here: are you finding that the ODE solvers themselves are actually slower than Python? The ODE solve is what happens in the actual solve call you make; everything before that is system generation using the symbolic models. If solve is slow, that might indicate you aren't using a good time-stepping method for your problem.

I notice a lot of subscripts in your model. Is this encoding some type of spatial or graph structure?

isaacsas commented on Nov 30, 2020

ODEProblem() also takes a long time, and eventually so does solve.

Did you time a second call? I think the majority could be compilation time here. There's superlinear scaling in LLVM for larger functions, and you're definitely going to hit that at this size. I'm curious about the run time right now, not the compile time, since I know compile times are changing drastically in Julia v1.6, so I'd rather measure that directly on the newer LLVM version.
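A rough way to separate the two, reusing a problem like the oprob sketched above (just a sketch):

@time solve(oprob, Tsit5())   # first call: includes compiling the generated functions
@time solve(oprob, Tsit5())   # second call: much closer to the pure run time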

You can multithread it, and indeed enabling multithreading on the Jacobian and making it sparse is likely the biggest speedup you can get.
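For example, something along these lines (a sketch only; it assumes osys, u0, tspan, and p as in the earlier snippet, and exact keyword support depends on the ModelingToolkit/Catalyst versions in use):

# ask for a symbolic Jacobian and store it as a sparse matrix
oprob_sparse = ODEProblem(osys, u0, tspan, p; jac = true, sparse = true)
sol = solve(oprob_sparse, KenCarp4())   # a stiff solver that can exploit the sparse Jacobian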

ChrisRackauckas commented on Nov 30, 2020

In the BCR testing I also found that ODEProblem on an ODESystem was very expensive, in that case as expensive as the first ODE solve call with KenCarp4. The default is to use RuntimeGeneratedFunctions, right? Do you think using @eval might be faster for a large system?

isaacsas commented on Nov 30, 2020

That most likely doesn't have a major effect.

ChrisRackauckas commented on Nov 30, 2020

Since creating variables with a given name is now much easier, just

a = Symbol("varname")
a5 = (@variables $(a)[5])[1]

which gives

varname[5]

for the value of a5, I think the original issue has been resolved. I've therefore renamed this issue to focus on the other problem (handling very large numbers of reactions).
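For completeness, a rough sketch of how the original lookup can now be written (untested; rn is again a placeholder ReactionSystem, and as above you may need to unwrap the Num depending on how the speciesmap keys are stored):

@variables t
name = Symbol("X_50")
X50  = (@variables $(name)(t))[1]   # builds the symbolic species X_50(t) from a runtime name
idx  = speciesmap(rn)[X50]          # its index within rn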

isaacsas commented on Aug 19, 2021