lecture-jax
Opt Savings: Use Vectorized function instead of broadcasting
Deploy Preview for incomparable-parfait-2417f8 ready!
| Name | Link |
|---|---|
| Latest commit | 536e25e0ef3cc74eb32bef218038860727744116 |
| Latest deploy log | https://app.netlify.com/sites/incomparable-parfait-2417f8/deploys/64cc9cb50ed3a9000820b818 |
| Deploy Preview | https://deploy-preview-88--incomparable-parfait-2417f8.netlify.app |
🚀 Deployed on https://64cc9f84d144565dcaf62119--incomparable-parfait-2417f8.netlify.app
Hi @jstac,
I've finished replacing reshape-based broadcasting with vmap everywhere possible. The timings on this PR are:
HPI completed in 0.03335309028625488 seconds.
VFI(jax not in succ) completed in 1.0069973468780518 seconds.
OPI completed in 0.3129911422729492 seconds.
And on the published lecture: https://jax.quantecon.org/opt_savings.html
HPI completed in 0.03399658203125 seconds.
VFI(jax not in succ) completed in 0.90895676612854 seconds.
OPI completed in 0.3122248649597168 seconds.
So, not much difference for HPI and OPI, but VFI is about 11% slower on this branch.
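For context, the change being benchmarked replaces reshape-based broadcasting with vmap. Here is a minimal, hypothetical sketch of the two styles; the scalar function f and the grids are made up for illustration and are not the lecture's actual Bellman operator:

```python
import jax
import jax.numpy as jnp

def f(w, y):
    # A stand-in scalar function of a wealth level w and an income level y.
    return jnp.log(w + y)

w_grid = jnp.linspace(1.0, 10.0, 50)
y_grid = jnp.linspace(0.5, 2.0, 20)

# Style 1: broadcasting -- reshape to (50, 1) and (1, 20) so that
# elementwise arithmetic inside f broadcasts to a (50, 20) array.
out_broadcast = f(jnp.reshape(w_grid, (50, 1)), jnp.reshape(y_grid, (1, 20)))

# Style 2: vmap -- keep f scalar and map it over each grid axis in turn.
f_vec = jax.vmap(jax.vmap(f, in_axes=(None, 0)), in_axes=(0, None))
out_vmap = f_vec(w_grid, y_grid)  # also shape (50, 20)

assert jnp.allclose(out_broadcast, out_vmap)
```

Both styles produce the same array; the question in this PR is which runs faster under jit and which reads better.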
Many thanks @Smit-create, this is very nicely done.
But in the interests of readability, would it be possible to have just two functions, B and B_vec, where B gets vectorized by vmap? I'd prefer that B give a very clear and readable description of the right-hand side of the Bellman equation, with all of the vectorization taking place on the outside to create B_vec.
This would mean removing intermediate functions like compute_c. I'm not sure whether it's possible.
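Since the thread doesn't quote the code itself, here is a hypothetical sketch of the requested pattern. The names B and B_vec come from the comment above; the grids, transition matrix Q, utility function, and parameter values are placeholders, not the opt_savings lecture's actual model:

```python
import jax
import jax.numpy as jnp

def B(i, j, ip, v, params, grids):
    # Scalar RHS of the Bellman equation at state indices (i, j)
    # (wealth, income) and next-period wealth index ip:
    #     u(c) + beta * E[ v(w', y') | y ]
    beta, R = params
    w_grid, y_grid, Q = grids                    # placeholder names
    c = R * w_grid[i] + y_grid[j] - w_grid[ip]   # implied consumption
    u = jnp.where(c > 0, jnp.log(c), -jnp.inf)   # log utility; infeasible -> -inf
    return u + beta * jnp.dot(Q[j], v[ip])       # v has shape (n_w, n_y)

# All vectorization happens outside B: map over ip, then j, then i.
B_vec = jax.vmap(
    jax.vmap(
        jax.vmap(B, in_axes=(None, None, 0, None, None, None)),
        in_axes=(None, 0, None, None, None, None)),
    in_axes=(0, None, None, None, None, None))

# Toy usage with dummy data.
n_w, n_y = 50, 3
grids = (jnp.linspace(0.1, 5.0, n_w),            # w_grid
         jnp.array([0.5, 1.0, 1.5]),             # y_grid
         jnp.full((n_y, n_y), 1 / n_y))          # uniform Markov matrix Q
params = (0.96, 1.02)                            # beta, R
v = jnp.zeros((n_w, n_y))

vals = B_vec(jnp.arange(n_w), jnp.arange(n_y), jnp.arange(n_w),
             v, params, grids)                   # shape (n_w, n_y, n_w)
greedy = jnp.argmax(vals, axis=-1)               # greedy policy, shape (n_w, n_y)
```

The point of the pattern is that only B needs to be read to follow the economics, while the nested vmap calls are mechanical and live in one place.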
Many thanks for the review @jstac. That's a good idea. I'll look into it in a new PR so that we can compare this approach with the new one.
Already implemented