Performance Regressions
The odu lecture, for example, looks like it takes about 3x as long to run now as it did with Julia 0.6. What we need to do is:
- [x] Cache/store the build information/runtimes for Julia 0.6.
- [ ] Identify affected lectures.
- [ ] Fix them.
- [ ] (long-term) Add some kind of benchmarking to the build process so we can find these automatically from Travis.
The v0.6 data can be found as JSON here
@XiaojunGuan This might be a good one to chip away at.
As a first step, could you look at the v0.6 data in the pastebin, and the v1.0 data at https://lectures.quantecon.org/status.html, and identify which lectures have slowed down?
Edit: Just to confirm, we only care about the Julia times, and things on the order of a few seconds here or there aren't too worrisome.
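To make the comparison mechanical, something like the sketch below could work. It assumes both data sources have been reduced to a dict of lecture name → Julia build time in seconds (the actual pastebin and status-page formats may need parsing first); the `ratio` and `min_diff` thresholds are made-up knobs that encode "a few seconds here or there aren't too worrisome":

```python
import json

# Hypothetical input shape: {lecture_name: build_time_in_seconds}.
# The real v0.6 pastebin / v1.0 status-page data may need parsing into this.
def find_slowdowns(old, new, ratio=1.5, min_diff=5.0):
    """Flag lectures whose runtime grew by more than `ratio`x AND by more
    than `min_diff` seconds, so small timing jitter is ignored."""
    flagged = {}
    for name, t_old in old.items():
        t_new = new.get(name)
        if t_new is not None and t_new > ratio * t_old and t_new - t_old > min_diff:
            flagged[name] = (t_old, t_new)
    return flagged

# Example with made-up numbers:
old = {"odu": 60.0, "smoothing": 20.0, "about_py": 5.0}
new = {"odu": 185.0, "smoothing": 52.0, "about_py": 6.5}
print(find_slowdowns(old, new))  # flags odu and smoothing, not about_py
```

The same function could later back the long-term Travis check: compare the current build's timings against the cached baseline and fail (or warn) if anything is flagged.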
Here is a list of the affected lectures:
- [ ] dyn_stack
- [ ] egm_policy_iter
- [ ] mccall_model
- [ ] need_for_speed
- [ ] smoothing
We need to be careful about precompilation time. The comparison should only use the second run of each notebook (in both versions), so Julia's one-time compilation cost doesn't skew the numbers.
Thanks @XiaojunGuan.
And that's a good point @jlperla --- I'll confirm with Matt.
No, I mean when running it manually... The build is already too slow to run everything twice until we can parallelize it