ODE.jl
[WIP] Adding iterators
Hi,
I have been playing around with an iterator version of oderkf and came up with my own implementation. My idea was as follows: make oderkf(F,x0,tspan::Vector) behave as previously and add oderkf(F,x0,tstart::Number), which creates an iterator. I reimplemented oderkf(F,x0,tspan::Vector) on top of the iterator version of oderkf, so that the two implementations are not redundant. I also made changes to the keyword arguments accepted by oderkf.
- New keyword arguments: minstep, maxstep, initstep. I decided that hmin and hmax from the previous implementation were already arbitrary, so why not allow the user to define problem-specific values? In the previous implementation hmin and hmax were computed from the maximal and minimal times; for an iterator this is impossible because, in principle, we might not know the maximal time. Therefore, I set the default value of minstep to eps(T)^(1/3), where T is the type of tstart; the default value for initstep is minstep, while maxstep = 1/minstep. Alternatively we could set maxstep = Inf, but all these values are a matter for discussion (see the sketch below this list).
- I added a tstop = Inf optional argument to the iterator version of oderkf; naturally, oderkf stops the integration exactly when t = tstop is reached.
- I changed the reaction to abs(h) < abs(hmin): it now produces a warning and does not output x (only t and h). Printing x is rather annoying when x is a large vector (the value of x is the last step before abs(h) < abs(hmin), so it is returned by the solver anyway).
- With tstop = Inf it is possible to integrate the system ad infinitum, so I had to add a test for finiteness of the results.
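For illustration, a minimal sketch of how these defaults could be computed (default_steps is a hypothetical helper name; the expressions are the ones proposed above):

function default_steps(tstart)
    T = typeof(tstart)
    minstep  = eps(T)^(1/3)   # problem-independent lower bound on the step
    initstep = minstep        # first attempted step
    maxstep  = 1/minstep      # upper bound (alternatively Inf)
    return minstep, initstep, maxstep
end

default_steps(0.0)   # => roughly (6.1e-6, 6.1e-6, 1.7e5) for Float64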
The implementation of the non-iterator version of oderkf is simply
function oderkf(F,x0,tspan::Vector,coefs...;args...)
    t0 = tspan[1]
    tstop = tspan[end]
    tout = Array(typeof(t0),0)
    xout = Array(typeof(x0),0)
    for (t,x) in oderkf(F,x0,t0,coefs...;args...,tstop=tstop)
        push!(tout,t)
        push!(xout,x)
    end
    return (tout,xout)
end
You can also use collect(ode45((t,x)->x, 1., 0., tstop = 10)).
You can now do something similar to what @ivarne had in mind in issue #27:
ode = ode45((t,x)->x, 1., 0.)
for (t,y) in ode
    if y > 10
        @show (t,y)
        break
    end
end
With iterators we would be one step closer to implementing root finding for solutions to ODEs. I am curious whether iterators could work together with dense output (#40, #44).
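For example, a rough sketch of how an event could be bracketed on the fly with the iterator API from this PR (the event function g and the refinement step are illustrative only):

g(t, y) = y - 10.0                       # event function: the solution crosses 10
told, yold = 0.0, 1.0                    # initial condition of the example
for (t, y) in ode45((t, x) -> x, 1.0, 0.0, tstop = 5.0)
    if sign(g(told, yold)) != sign(g(t, y))
        @show (told, t)                  # the root lies in [told, t]; refine it,
        break                            # e.g. by bisection or with dense output
    end
    told, yold = t, y
end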
PS: I know that the API is not fixed yet, but I needed iterators for my own purposes and decided to share this draft with you.
Cheers, Paweł
This seems to be a hot season for PRs. Any thoughts on iterator versions of the solvers? I am willing to rebase this PR onto the current master if anyone is interested in merging it.
I love this API idea, so I say do it!
In the interest of keeping the API consistent, we probably want to make this style possible in other solvers, but if that seems like a daunting task there's no problem in splitting that into more PRs. Also, note that @acroy implemented the keywords for minstep etc. (including a nice estimator for the initial step) in #59, so you might want to base on that branch to avoid conflicts.
This would probably also be interesting for creating a better solution to timholy/ProgressMeter.jl#15, so there are real use cases.
Ok, I think I will wait until #59 is merged and rework this PR after that.
Now that both #59 and #61 are merged, this is probably a good time for this too, if you're up for it :)
OTOH it might make sense to wait for @mauro3's RK implementation. Otherwise the iterator version will also count as "derived" from oderkf.
Good point.
I agree with @acroy, I will wait and build on top of @mauro3's work.
The pressure is on...
OK, so I want to help build the iterator version, because I really need it at this point. Are there any outstanding issues with @mauro3's PR or can we merge that and move on? :)
From my side, it's ready to be merged.
An alternative to your problem in #71 could also be to introduce a poststep function which gets executed after each step. This could be useful for diagnostics and such as well. Less general than iterators but probably a lot easier to program.
It's not just for the normalization problem that I need it. It is also useful for visualizing the convergence, so that I can see for example when the steady state is reached.
If @mauro3's PR is merged in time, I will be able to take a look at the iterators this weekend and rework them to fit the new solvers.
That would be great! Please, let me know how I can be of assistance.
I guess poststep is a simple example of an event function and would be very useful. The question is how to realize the callback?
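One possible realization, sketched on top of the iterator from this PR (the wrapper name ode45_poststep and the poststep keyword are hypothetical):

function ode45_poststep(F, x0, t0; poststep = (t, x) -> nothing, kwargs...)
    tout = Array(typeof(t0), 0)
    xout = Array(typeof(x0), 0)
    for (t, x) in ode45(F, x0, t0; kwargs...)
        poststep(t, x)                   # runs after every accepted step
        push!(tout, t)
        push!(xout, x)
    end
    return (tout, xout)
end

# usage: print a simple convergence diagnostic after each step
ode45_poststep((t, x) -> -x, 1.0, 0.0, tstop = 5.0,
               poststep = (t, x) -> @show((t, x)))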
@pwl any progress on implementing the iterator version on top of @mauro3 's new RK code? :)
@berceanu Sorry for not keeping you up to date. I keep this issue at the back of my head constantly, but the last week was pretty busy and currently I am at a workshop (which lasts until Friday). I gave the code a look some time ago, and implementing iterators will take considerably more time than I anticipated. The reason is that I think it is necessary to unify fixed-step and adaptive methods to avoid wasting resources on supporting two separate iterators, and this would be a major rework of the existing source code. From the portions of the code I understand, it seems possible, but as I said, it might take me some time to dig into the algorithms.
If you are in a hurry and need the iterators right now, feel free to use my fork; I have been using this implementation for quite some time and it works flawlessly (although some new functionality, like points=:specified or custom types, is not implemented there). The API is described at the beginning of this PR. Also, I don't want to hold you back, so you could give this issue a shot yourself.
One thing I noted yesterday using DASSL is that the error message for an error in the objective function did not report in which function the error happened and was thus pretty much useless. I suspect because of the task-based iterator. Is there a way around this? Or is there at least an issue in Julia tracking this?
Certainly, that's caused by the coroutine implementation of the iterators in DASSL. I don't know if there is any way to fix this; I started a "discussion" on Google Groups [1] but there wasn't much interest in it. I chose to implement iterators via coroutines for DASSL because it was the least invasive method (essentially I just added one produce() statement to the main function) with practically no downsides, apart from the error reporting issues. So coroutines provide a sort of poor man's iterator.
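For reference, this is roughly what the coroutine pattern looks like (a generic sketch, not DASSL's actual code; it uses the pre-0.6 Task/produce API):

# the solver loop just calls produce() after each accepted step; iterating
# over the resulting Task then drives the loop one step at a time
function euler_task(f, y0, t0, h, tstop)
    @task begin
        t, y = t0, y0
        while t < tstop
            y += h*f(t, y)     # one explicit Euler step
            t += h
            produce((t, y))    # hand the step to whoever consumes the Task
        end
    end
end

for (t, y) in euler_task((t, y) -> -y, 1.0, 0.0, 0.1, 1.0)
    @show (t, y)
end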
I think that for ODE we should have a proper iterator implemented via specialized start, next and done routines; that's why it might take a little bit longer to get it right.
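A minimal sketch of what that protocol looks like for a hypothetical fixed-step solver type (pre-1.0 iteration interface; the names are illustrative, not ODE.jl API):

immutable EulerIterator
    f; y0; t0; h; tstop
end

Base.start(s::EulerIterator) = (s.t0, s.y0)
Base.done(s::EulerIterator, state) = state[1] > s.tstop

function Base.next(s::EulerIterator, state)
    t, y = state
    state = (t + s.h, y + s.h*s.f(t, y))   # one explicit Euler step
    return (state, state)                  # (item, new state)
end

# iteration then works with an ordinary for loop, as in the examples above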
[1] https://groups.google.com/forum/#!topic/julia-users/R9yILEz_uno
Thanks for the clarification (I think you should open an issue with the test case you posted to julia-users). Doing it the other way will need some careful designing of the state variable, but that is probably worth it in the long run.
Could use or take inspiration from: https://github.com/spencerlyon2/IterationManagers.jl
I re-implemented the iterators based on the current master, that is, I took into account the huge reimplementation of the solvers by @mauro3 in PR #68. I tried my best to leave the algorithms intact, but this is an initial version and there will be some bugs. I also cut out some of the stuff, which I plan to add back gradually, but first I wanted to hear some opinions. In particular, I removed the negative direction of integration (this can easily be added as a wrapper to the current iterator protocol) and I removed most of the stuff regarding types, so at this point iterators probably don't support custom types. On the positive side, dense output is still supported (as a wrapper around the actual solver)! To construct an iterator (based on e.g. bt_feuler) you can call the ODE.solver function:
sol = ODE.solver(f,y0,t0;
                 method = ODE.bt_feuler,
                 reltol = 1e-6,
                 points = :specified,
                 tspan = (t0.+[1,2,3,4]))
ODE.solver supports both the adaptive and the fixed-step methods and switches between them depending on what you pass to the method keyword.
In the end we can use the new iterators as usual:
for (t,y) in sol
    @show (t,y)
    if t > 3
        break
    end
end
Once we fix the interface I could work on the removed features (arbitrary scalar-like types and reverse time integration).
There is no standardized way of collecting the results yet, but there has been a recent discussion about what type of result we expect from a solver, so I decided to wait until that issue is resolved.
And of course tests don't pass at this point.
EDIT: I removed the call method for the tableaus and the need to use DenseProblem explicitly, now all you need is ODE.solver with the method parameter and all the dense output parameters like points=:all/:specified and tspan.
Another update: I added reverse time integration and a simple root finding option (via the already existing Hermite interpolation in the dense output). The root finding uses a very simple bisection algorithm. Here is an example summarizing the new features:
import ODE

f(t,x) = [x[2],-x[1]]
y0 = [0.0, 1.0]
t0 = 0.0
tspan = -collect(1:5)
stopevent(t,y) = y[1] > 0.0

prob = ODE.solver(f,y0,t0;
                  reltol = 1e-7,
                  abstol = 1e-7,
                  tstop = -Inf,
                  method = ODE.bt_feuler,
                  tspan = tspan,
                  points = :specified,
                  stopevent = stopevent,
                  roottol = 1e-10)

for (t,y) in prob
    @show (t,y)
end
and the result is
(t,y) = (0.0,[0.0,1.0])
(t,y) = (-1.0,[-0.8414842895974096,0.5403108491723219])
(t,y) = (-2.0,[-0.909326182080129,-0.4161599958706816])
(t,y) = (-3.0,[-0.14112670311715536,-0.9900394570025749])
(t,y) = (-3.141592654651042,[3.9867758613623664e-11,-1.0000496742146774])
The tspan argument together with points=:specified ensures that results will be returned at times t=-1,-2,-3,-4,-5 (plus the initial time t=0; we should decide whether this is the default behavior), but before we reach time t=-4 the root finder triggers and finds the root (i.e. the function stopevent returns true). The tolerance for the root finder is given by roottol; we find that the root is t=-pi+/-10e-9, which is of course correct because the solution is y[1]=sin(t). There are still some rough edges; for example, tstop and tspan are somewhat redundant. Apart from that, the support for scalar-like types is not there yet.
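The bisection step itself can be sketched as follows; interp(t) stands for the Hermite interpolant provided by the dense output, and stopevent/roottol are as above (the function name is illustrative):

# locate the event inside [tl, tr], assuming stopevent is false at tl and
# true at tr; interp is an assumed dense-output interpolant
function bisect_event(interp, stopevent, tl, tr, roottol)
    while abs(tr - tl) > roottol
        tm = (tl + tr)/2
        if stopevent(tm, interp(tm))
            tr = tm    # event already triggered at tm, keep the left half
        else
            tl = tm    # not triggered yet, keep the right half
        end
    end
    return (tl + tr)/2
end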
And there is a bonus as well: without any modifications we can use the algorithms from DASSL! (although we need a wrapper to conform to the API of DASSL)
using DASSL
using Iterators
dassl_wrapper(f,y0,t0;kargs...) =
    imap(x->(x[1],x[2]),
         dasslIterator((t,y,dy)->f(t,y)-dy, y0, t0; kargs...))

prob = ODE.solver(f,y0,t0;
                  (...)
                  method = dassl_wrapper,
                  (...)
@tlycken @mauro3 would you mind taking a look at my implementation of the iterators? Holiday season is coming so I might have some time to add the tests and optimize it a bit. It could also be an opportunity to modify the output according to #82 or #80.
Sorry to not have looked at this yet! I'll try over the next few days, but I suspect I may well not be able to give much input until mid-January.
I took a quick look, and although I won't have time to prod deeper, at least on the surface it looks quite good to me.
Do we have usage docs for this library somewhere, that should be updated?
I had a brief look over it: looks good. A few comments in-line, probably redundant as this is still WIP.
To test performance, maybe use https://github.com/mauro3/IVPTestSuite.jl: Threebody and HIRES should work for the explicit solvers. (I should get that package cleaned up to be more user friendly and more directly integrated...)
Random thoughts/questions:
- would multi step solvers fit into this scheme?
- would this integrate with the Sundials.jl solvers as well? It would be cool to eventually be able to call those through ODE.jl.
- could start and done be generic, so that a particular class of solvers would only need to implement next?
- This design assumes that dense output can be attached to the solver afterwards. I think that, for more sophisticated (higher order) dense output, the solvers and the dense output are often more tightly coupled. Not something to implement now, but something to ponder.
- there seems to be some overlap between the types DenseProblem and Problem; maybe that could be refactored?
- eventually, I would like to see in-place functions used in the solvers (see e.g. #72). Would this work with this design?
I'll look at it in more detail next year... thanks for the work!
Actually, the above points about "sophisticated dense output" and multistep methods may need the same solution: a state which holds more than just the last output, but several previous or intermediate steps. Pawel, as your DASSL code is multistep, maybe implementing that within this framework would lead to valuable insights on a general design (as well as more work!)?
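A rough sketch of what such a state could look like (the type and field names are purely illustrative):

# iterator state for a multistep method: besides the current point it keeps
# a short history of previously accepted steps for the multistep formula
type MultistepState{T,Y}
    t::T
    y::Y
    tprev::Vector{T}   # times of the last k accepted steps
    yprev::Vector{Y}   # corresponding solution values
end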
Hi, I made some major updates to the PR and refactored the code, loosely based on the ideas of @mauro3. The changes include:
- Support for the legacy(?) interfaces like ODE.odeXX, so most of the tests pass (but not all; there might be something wrong with the adaptive steppers).
- Complete rewrite of the lower level structures: we now have ExplicitODE to hold the ODE data, Options to keep all the options in one place, and an AbstractStepper with the algorithms and the associated iterator structure. Everything is held together by the Solution{AbstractStepper} type. Again, the names can be changed if their meaning is not clear enough.
- Given ode = ExplicitODE(...) and stepper = SomeStepper(...), one can construct an iterator by calling sol = solve(ode, stepper; kargs...) (see the usage sketch after this list).
- The modified Rosenbrock method ode23s is now supported.
- The dense output wrapper is now a DenseStepper and can be used as any other stepper; to get the dense output use dense(sol) with sol as above. The dense stepper takes its options from the Options component of the Solution.
- All implemented algorithms support the in-place formulation (i.e. I switched from dy=F(t,y) to F!(t,y,dy)). In fact this is now the default way of defining an ODE, and the higher level interface converts F to F!, and does the same for the Jacobian.
- Removed the ugly implementation of call for the tableaus.
Future plans
- Implement the multistep methods (I think this is the only remaining unimplemented big chunk of the existing code).
- Add tests for the lower level interface and the iterator protocol.
- Incorporate DASSL?
- Change the default output type of the higher level interface to an array of tuples [(t_n,y_n)] instead of the current tuple of arrays ([t_n],[y_n]); this way we could get rid of the useless conversions of the results coming from the iterator (see the sketch after this list).
- So far all the dense output uses a hard-coded 3rd order interpolation; as @mauro3 suggested, some steppers could provide more accurate and efficient implementations.
- Change the top-level interface so that it can accept the in-place F! and jac!? Maybe have ODE.odeXX!(...) and ODE.odeXX(...).
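For the first item, the two layouts would relate like this (sol is an iterator as returned by solve above; the conversion is only for illustration):

tuples = collect(sol)             # proposed output: a Vector of (t_n, y_n) pairs

ts = [t for (t, y) in tuples]     # recovering the current tuple-of-arrays
ys = [y for (t, y) in tuples]     # layout ([t_n], [y_n]) if needed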