cvx_short_course
Jupyterbook anyone?
I have played with Jupyter Books, which are imho somewhat nicer than Sphinx. Have a look please. Also, playing with your portfolio optimization in applications, I have noticed that the construction
cp.sum_squares(np.transpose(np.linalg.cholesky(cov)) @ w) is a lot faster than cp.quad_form(w, cov).
I am a bit puzzled here... @phschiele please comment.
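For context, here is a minimal sketch of the two formulations being compared; the problem data below is hypothetical, only the two risk expressions are taken from the comment above:

```python
import cvxpy as cp
import numpy as np

# Hypothetical data: a random PSD covariance matrix and a weight variable.
np.random.seed(0)
n = 300
A = np.random.randn(n, n)
cov = A @ A.T + 1e-3 * np.eye(n)
w = cp.Variable(n)

# Formulation 1: quadratic form (CVXPY verifies that cov is PSD).
risk_quad = cp.quad_form(w, cov)

# Formulation 2: sum of squares of the transposed Cholesky factor applied to w.
# With cov = L @ L.T, ||L.T @ w||^2 equals w.T @ cov @ w, so both are equivalent.
L = np.linalg.cholesky(cov)
risk_chol = cp.sum_squares(L.T @ w)

constraints = [cp.sum(w) == 1, w >= 0]
cp.Problem(cp.Minimize(risk_chol), constraints).solve()
```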
You can build the book via
jupyter-book clean book
jupyter-book build book
The book is then in book/_build/html. Push it to a server you control...
Hi @tschm, thank you for this contribution! I was able to build the book with your instructions above, and it looks great!
Here is a preview:
There are some external dependencies here, e.g., the short course homepage points directly to paths on GitHub, so moving the notebooks would break these links. One option would be to host the HTML output on the GitHub Pages web server of this repo and update the links on the short course website accordingly. A simple GitHub action can push the book to the gh-pages branch.
We'd have to check with the stakeholders of the short course (e.g., @stephenpboyd, @SteveDiamond).
Re your question about the portfolio optimization example:
If you profile the example, e.g. using snakeviz, you'll find that most time is spent in is_psd_within_tol(). (Due to caching, it looks like it is computed three times (3 * 1.31 > 1.49), but it's only computed once.)
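For reference, a minimal profiling sketch using Python's built-in cProfile (snakeviz is just a viewer for the resulting .prof file); the problem instance below is hypothetical:

```python
import cProfile
import pstats

import cvxpy as cp
import numpy as np

# Hypothetical instance, just to have something to profile.
np.random.seed(0)
n = 200
A = np.random.randn(n, n)
cov = A @ A.T + 1e-3 * np.eye(n)
w = cp.Variable(n)
prob = cp.Problem(cp.Minimize(cp.quad_form(w, cov)), [cp.sum(w) == 1, w >= 0])

# Profile the solve and dump the stats; `snakeviz solve.prof` visualizes them.
pr = cProfile.Profile()
pr.enable()
prob.solve()
pr.disable()
pr.dump_stats("solve.prof")

# Quick text summary without snakeviz.
pstats.Stats("solve.prof").sort_stats("cumulative").print_stats(10)
```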
The implicit PSD check in your version, i.e. computing the Cholesky factorization, is much faster but does not allow for the required tolerances. For the case where you know that P in quad_form is PSD, e.g. because you checked beforehand or because it is PSD by construction (like covariance matrices), @SteveDiamond added an assume_PSD argument in https://github.com/cvxpy/cvxpy/pull/1818. This feature will be available in CVXPY 1.3.
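For illustration, a minimal sketch of the assume_PSD argument (requires CVXPY 1.3 or later; the data below is hypothetical):

```python
import cvxpy as cp
import numpy as np

# Covariance that is PSD by construction (A @ A.T plus a small ridge).
np.random.seed(0)
n = 300
A = np.random.randn(n, n)
cov = A @ A.T + 1e-3 * np.eye(n)
w = cp.Variable(n)

# assume_PSD=True skips the expensive is_psd_within_tol() check,
# because we vouch for the PSD-ness of cov ourselves.
risk = cp.quad_form(w, cov, assume_PSD=True)

prob = cp.Problem(cp.Minimize(risk), [cp.sum(w) == 1, w >= 0])
prob.solve()
```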
Thank you @phschiele. The book is still somewhat incomplete. The mbox is displayed in a somewhat ugly font. Yes, it would be easy to link it to GitHub Actions. Please note that all notebooks are executed (caching is an option here), and hence one could catch problems. Interesting insights on quad_form (thank you!). I also noticed that not using the variable f and the constraint F.T @ w == f also results in a mild speed improvement. However, your main point, namely that the factor approach is much faster than using a full covariance matrix, shouldn't be polluted by quad_form issues. Hence I would use the faster Cholesky decomposition.
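To illustrate the remark about dropping the variable f, here is a minimal sketch of both variants for a hypothetical factor model cov = F @ Sigma_f @ F.T + diag(D); the names and dimensions are assumptions, not the notebook's exact code:

```python
import cvxpy as cp
import numpy as np

# Hypothetical factor model data.
np.random.seed(0)
n, m = 500, 30
F = np.random.randn(n, m)           # factor loadings
Sigma_f = np.eye(m)                 # factor covariance
D = np.random.uniform(0.1, 1.0, n)  # idiosyncratic variances
w = cp.Variable(n)

# Variant 1: explicit factor-exposure variable f plus a coupling constraint.
f = cp.Variable(m)
risk_with_f = cp.quad_form(f, Sigma_f) + cp.sum_squares(cp.multiply(np.sqrt(D), w))
constraints_with_f = [f == F.T @ w, cp.sum(w) == 1, w >= 0]

# Variant 2: substitute F.T @ w directly, using the Cholesky factor of Sigma_f.
L_f = np.linalg.cholesky(Sigma_f)
risk_direct = cp.sum_squares(L_f.T @ (F.T @ w)) + cp.sum_squares(cp.multiply(np.sqrt(D), w))
constraints_direct = [cp.sum(w) == 1, w >= 0]

cp.Problem(cp.Minimize(risk_direct), constraints_direct).solve()
```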
@phschiele I have to stop working on this for now, but I have learnt more details both about Jupyter Books and about cvxpy. I have slightly rearranged the content of the repo and revisited the long README file.
This work is not finished though. It would need some integration via GitHub Actions and a home somewhere on a server.
There might still be a few issues with MathJax, but they could be ironed out quickly.
@phschiele The approach taken here seems to be of a cosmetic nature, but there are features worth thinking about. Any new notebook will show up in the TOC without any further interaction and will be linked from the central document. The book writes itself :-) You also get search functionality across all notebooks and markdown files, and you get links to Binder, Colab, and a private JupyterHub server you maintain. The build of the book can be automated (through GitHub Actions). Of course, one could go much deeper here, and I point you to my inspiration: https://python.quantecon.org/intro.html
I have revisited the problem of creating a Jupyter Book out of the content of this repo. Let me emphasize that I have not touched the content, but I have created the GitHub workflows that take care of building the book and caching the container used by Binder.
The book is served at https://tschm.github.io/cvx_short_course/docs/index.html. When the book is compiled, all notebooks are executed and issues are reported as warnings, so this is a good way to control the quality of the notebooks.
The book also comes with a slick search functionality: you can search across all notebooks etc. at once. Binder is integrated, so students don't have to build a local environment at all.
@stephenpboyd @phschiele @PTNobel @SteveDiamond
The only subtle change to all notebooks: the first cell has to be a Markdown cell with a # header. Otherwise the book struggles to identify a title string for the TOC.
It’s awesome!
I don't have the permissions to assign an official reviewer or to merge a pull request. Anyone keen?
@tschm Thanks for all of the updates! Happy to take a look 🚀
The repo2docker construct is somewhat fragile and may not work at the moment. It's more robust to build an image and publish it on Docker Hub. I have opened an issue here: https://github.com/jupyterhub/repo2docker-action/issues/99
I am applying the Docker Hub path in tschm/antarctic.
Please note that the construction of the image is not critical though. It reduces the time Binder needs to fire up a server, as it can cache an image. However, this effect might be mitigated by many students using Binder (and Binder hence caching), or by accepting that it could take a while (e.g. 1 minute) to build the image when the cache is empty.