QuantEcon.py
Refactor simulation methods in the IVP module
All,
I would like to refactor the simulation loops (i.e., _integrate_fixed_trajectory and _integrate_variable_trajectory) in the quantecon.ivp module in order to speed them up. However, I am not exactly sure how to go about it. Would this be a good use case for Numba? If so, are there examples in the code base already that I could use as a guide?
@davidrpugh, have you done any benchmarks to see which parts of these functions are consuming the most time?
I would suspect that numba might not work because you are calling out to a scipy ode solver, but I am not sure about that.
I'm not 100% sure, but it looks like this line is effectively causing the solution array to grow each iteration. Is that true? If so, then I would expect it to be a very inefficient part of the code.
@spencerlyon2
I haven't done any formal benchmarks, but in both integration methods the solution array is growing each iteration. For the fixed length trajectory I am sure that I could allocate the array ahead of time and then populate each row as I iterate through the loop; for the variable length integration it is not obvious to me how I can avoid growing the array.
I think that some benchmarks or line profiling could help so we know exactly where the routines are spending their time.
Also I'd say it would be an improvement if we could pre-allocate in _integrate_fixed_trajectory.
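For the fixed-length case, the idea is to allocate the whole solution array once and fill in rows as the loop runs. A minimal sketch of that pattern, using a simple Euler step as a stand-in for the actual scipy solver call (the function name and signature here are hypothetical, not the quantecon.ivp API):

```python
import numpy as np

def integrate_fixed_preallocated(f, y0, t0, h, n_steps):
    """Fill a pre-allocated solution array instead of growing it
    each iteration. `f(t, y)` is the right-hand side of the ODE."""
    y = np.asarray(y0, dtype=float)
    # one row per step; columns: t followed by the state vector
    solution = np.empty((n_steps + 1, 1 + y.size))
    solution[0] = np.hstack((t0, y))
    t = t0
    for i in range(1, n_steps + 1):
        y = y + h * f(t, y)  # stand-in for the scipy solver step
        t = t + h
        solution[i] = np.hstack((t, y))
    return solution
```

Since the number of steps is known up front, the loop body only writes into existing memory, so no reallocation or copying happens inside the loop.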
If allocation really is the issue in _integrate_variable_trajectory maybe we could allocate in chunks. This is a rough sketch of the idea:
- start out with an array with space for 50 iterations
- after 50 if we aren't done, reallocate an array with space for 70 (20 more) and fill in the first 50 with data we already computed
- re-evaluate after 70 again and grow the array a bit more.
This might be better, or it might well be worse. We would almost certainly allocate "too much" space, in that we would preallocate at some point and converge before that allocation is used up. This isn't a huge issue, as we could simply trim the array and retain only the data we computed, but then we would have wasted some memory by allocating too much.
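The chunked-growth idea above can be sketched roughly as follows. All names here (`step`, `done`, `chunk`) are hypothetical placeholders, not the quantecon.ivp API:

```python
import numpy as np

def integrate_variable_chunked(step, y0, t0, done, chunk=50):
    """Grow the solution array in chunks rather than one row at a time.
    `step(t, y)` advances the state one step; `done(t, y)` says when
    to stop (e.g. a convergence check)."""
    y = np.asarray(y0, dtype=float)
    ncols = 1 + y.size
    buf = np.empty((chunk, ncols))
    buf[0] = np.hstack((t0, y))
    n, t = 1, t0
    while not done(t, y):
        if n == buf.shape[0]:
            # out of room: allocate a bigger array, copy the data over
            bigger = np.empty((buf.shape[0] + chunk, ncols))
            bigger[:n] = buf
            buf = bigger
        t, y = step(t, y)
        buf[n] = np.hstack((t, y))
        n += 1
    return buf[:n]  # trim the unused rows before returning
```

With a chunk size of 50 this reallocates at most once every 50 iterations instead of every iteration, at the cost of the final trim and possibly some temporarily wasted memory.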
Again, some benchmarks and profiling would probably help answer these questions.
@davidrpugh I have used memory_profiler before to see what is happening with memory allocation in Python. https://pypi.python.org/pypi/memory_profiler. It is a pretty simple way to check out what memory is getting allocated using a @profile decorator.
For more performance insight, line_profiler is a neat project which is again based on a @profile decorator. https://pypi.python.org/pypi/line_profiler
I find these tools useful for having a quick look at the code - but vbench might be a good thing to integrate to understand performance in general over time.
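Both memory_profiler and line_profiler need to be installed separately and driven by their @profile decorator. For a quick first look without extra dependencies, the stdlib cProfile module can confirm whether the grow-the-array pattern dominates. This is a self-contained toy, not the actual ivp code:

```python
import cProfile
import io
import pstats

import numpy as np

def grow_by_vstack(n):
    """Toy loop that grows an array each iteration, mimicking the
    pattern suspected to be slow in the ivp simulation loops."""
    sol = np.empty((0, 2))
    for i in range(n):
        sol = np.vstack((sol, [[i, i * 0.5]]))
    return sol

profiler = cProfile.Profile()
profiler.enable()
grow_by_vstack(500)
profiler.disable()

stream = io.StringIO()
pstats.Stats(profiler, stream=stream).sort_stats("cumulative").print_stats(5)
print(stream.getvalue())  # top-5 functions by cumulative time
```

If the vstack/concatenate calls dominate the report, that supports pre-allocating (or chunk-allocating) the solution array.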
@mmcky
Thanks. I have used the profiler magic commands in IPython before to profile code but that's about it. I will take a look at these tools.
If you need any help getting these set up and going let us know!