Create a getting started section and add a new linear regression example
This PR -
- Adds a new "Getting Started" section in the docs which I will be working on for the next couple of weeks
- Moves `basics.md` and `overview.md` to the "Getting Started" section. Currently, there are no changes in the files, but they will be changed soon.
- Adds a Linear Regression example to the "Getting Started" section
For the linear regression guide -
- Should I add the backpropagation step in the first half, or should I leave it to the `Flux.Optimise.update!` step?
- This guide overlaps with a lot of other textual information present but scattered in Flux's docs. These other texts will also be updated or moved to a better place soon. `basics.md`, for example, does something very similar with dummy data, and the current "Getting Started" guide does something similar but with pre-defined weights.
PR Checklist
- [ ] Tests are added
- [ ] Entry in NEWS.md
- [ ] Documentation, if applicable
Should we consider having a "getting started" on the website in addition to the docs?
The first thought I had was to add a hyperlink on the website's Getting Started page, which would redirect the user to the docs' Getting Started page (similar to the Ecosystem page).
Should I also add the same tutorial (once approved) to the website, or should we link the section?
I think there is a consensus not to use the `Flux.params` API any longer. We should not introduce it into new tutorials being written.
Thank you for the suggestion! I took some time to go through Optimisers.jl, and I am assuming that the traditional Flux training method should be replaced with Optimisers.jl?
Agree we should move away from the whole weird Params story.
But perhaps this linear regression example should delay introducing Optimisers.jl as long as possible. For simple gradient descent, just writing out something like this:
```julia
dLdW, _, _ = gradient(loss, W, x, y)
W .= W .- 0.1 .* dLdW
```
might be a better level than immediately introducing optimiser state etc. It's unfortunate that this has quite a few moving parts, unlike train!'s apparent simplicity (although really train! hides a lot & this is also confusing). Maybe Optimisers.jl ought to be introduced along with an explanation that adding momentum helps, so that you know why there is a state?
Even something like this seems OK to me, pretty explicit, and makes you understand why you are about to see a tool for walking over the arrays:
```julia
m = Dense(1 => 1)
for step in 1:10
    dLdm, _, _ = gradient(loss, m, x, y)
    m.weight .= m.weight .- 0.1 .* dLdm.weight
    m.bias .= m.bias .- 0.1 .* dLdm.bias
end
```
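For contrast, a sketch of what introducing Optimisers.jl with momentum might look like (hedged: this assumes the `Optimisers.setup` / `Optimisers.update!` API; the data, model, and loss below are made-up placeholders, not code from the PR):

```julia
using Flux, Optimisers

# Placeholder toy data, model, and loss, just to make the sketch runnable
x = rand(Float32, 1, 20)
y = 2 .* x .+ 1
model = Dense(1 => 1)
loss(m, x, y) = Flux.mse(m(x), y)

# Momentum keeps a running average of past gradients; that running
# average is exactly the optimiser "state" which setup creates.
opt_state = Optimisers.setup(Optimisers.Momentum(0.1), model)

for step in 1:10
    grads = gradient(m -> loss(m, x, y), model)[1]
    # update! mutates the state and the model's arrays in place
    Optimisers.update!(opt_state, model, grads)
end
```

The extra `opt_state` is the moving part the plain loop above avoids, which is why explaining momentum first makes the state feel motivated rather than arbitrary.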
One more comment. In such tutorials, things like `params = Flux.params(W, b)` seem super-confusing. It would be nice to choose variable names which are very clearly things you've chosen, not features of Flux. `flux_model` is good.
(Link to rendered version: https://github.com/Saransh-cpp/Flux.jl/blob/linear-regression/docs/src/getting_started/linear_regression.md )
Agreed, we should keep the linear regression example simple. I will update the tutorial to show the gradient descent algorithm in action.
I have been working on the logistic regression example locally and will update that with the same.
I'll update this with the new train! definition once #2029 is merged. Right now this PR does not use train! in any way.
Thanks for the review, @mcabbott! Pluto sounds good! We can get rid of the copy-paste section if this is converted to a Pluto notebook (or file, not sure, will go through it in detail).
(SciML uses this copy-paste section at the top, but this code was too lengthy to be placed at the top)
Edit: A discussion about the documentation of Metalhead is going on at https://github.com/FluxML/Metalhead.jl/pull/199, which could result in a uniform template for these getting started/quickstart guides.
@mcabbott, JuliaManifolds/Manopt.jl renders Pluto notebooks in the Documenter documentation using the following make.jl contents - https://github.com/JuliaManifolds/Manopt.jl/blob/master/docs/make.jl#L1-L106.
This does look a bit hacky, and it also distorts the documentation when a user opens one of the rendered Pluto notebooks -
[Screenshots comparing a normal Documenter page with a rendered Pluto notebook, for both the sidebar and the settings]
Is there a legitimate way to render Pluto notebooks in Documenter's documentation? This hack does not look good to me. Alternatively, we could keep the section as it is and at the top link a Pluto notebook that can be downloaded for following along.
Here is how the converted Pluto notebook looks - https://saransh-cpp.github.io/assets/pluto/linear_regression.jl.html
Note: I am just redirecting users to the html page generated by Pluto from here - https://saransh-cpp.github.io/blog/ (this will be removed once this PR is merged to avoid duplicate pages on the web).
Currently, https://fluxml.ai/Flux.jl/stable/models/overview/ does something similar but is not as extensive as this guide. Should it be removed, or should it also be put under the "Getting Started" guide with a better title?
What if anything do we lose if it's removed? I think it would be nice to do so, but any material in it which isn't covered elsewhere would need a new home.
I think the following text can be used at the top of this guide. The rest of this page is definitely a subset of this guide.
Flux is a pure Julia ML stack that allows you to build predictive models. Here are the steps for a typical Flux program:
- Provide training and test data
- Build a model with configurable parameters to make predictions
- Iteratively train the model by tweaking the parameters to improve predictions
- Verify your model
Under the hood, Flux uses a technique called automatic differentiation to take gradients that help improve predictions. Flux is also fully written in Julia so you can easily replace any layer of Flux with your own code to improve your understanding or satisfy special requirements.
Here's how you'd use Flux to build and train the most basic of models, step by step.
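The four steps above might be condensed into one runnable sketch, roughly like this (a toy example with made-up data, using plain gradient descent; not code taken from the PR):

```julia
using Flux

# 1. Provide training data: a toy dataset where y = 2x + 1
x = reshape(collect(Float32, 0:0.1:1), 1, :)
y = 2 .* x .+ 1

# 2. Build a model with configurable parameters
model = Dense(1 => 1)

# 3. Iteratively tweak the parameters with plain gradient descent
loss(m, x, y) = Flux.mse(m(x), y)
for step in 1:200
    g = gradient(m -> loss(m, x, y), model)[1]
    model.weight .-= 0.1 .* g.weight
    model.bias .-= 0.1 .* g.bias
end

# 4. Verify: after training, the loss should be close to zero
loss(model, x, y)
```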
> Currently, https://fluxml.ai/Flux.jl/stable/models/overview/ does something similar but is not as extensive as this guide. Should it be removed, or
One virtue of that page is that it's much shorter.
I like this PR's story, it's a nice ground-up explanation. But if you have met some of this before, and want a faster path to seeing how to write it in Julia's syntax and using Flux's pieces, then perhaps you prefer the older one.
Not sure what the ideal way to organise this material is...
> But if you have met some of this before, and want a faster path to seeing how to write it in Julia's syntax and using Flux's pieces, then perhaps you prefer the older one.
Yes, this makes sense. Maybe converting it into a "Quickstart" page under the "Getting Started" section?
In addition to the main manual
https://fluxml.ai/Flux.jl/stable/models/overview/
we also have some quite nice tutorials here:
https://fluxml.ai/tutorials.html
How do we make these all findable, and where does this new page go?
The main docs also seem a slightly awkward combination of introduction and reference. It's possible that the "Building models" heading should be split in two? Not sure.
> In addition to the main manual https://fluxml.ai/Flux.jl/stable/models/overview/ we also have some quite nice tutorials here: https://fluxml.ai/tutorials.html
> How do we make these all findable, and where does this new page go?
While drafting the "Getting Started" section, I wanted to include only the guides that will get a user started with Flux. The website tutorials should be the ones introducing something that a user doesn't find themselves engaged with when they begin with ML/DL or an ML/DL package, for example, GANs and Transfer Learning. I think the DataLoader tutorial should be moved to the MLUtils page, and the Deep Learning with Flux - A 60 Minute Blitz (September 2020) can be added to the "Getting Started" section.
Ideally, there should be a "Tutorials" heading on the docs' sidebar which should redirect users to the website's tutorials page. Similarly, the website's getting started page should redirect users to the docs' getting started section.
A nice infographic - [image: diagram distinguishing tutorials, how-to guides, and explanations]
IMO, the DataLoader example is a "How-To Guide", the "Getting Started" guides are more inclined towards "Explanations", and the website tutorials are "Tutorials".
I was looking at Turing's documentation a bit. They clearly face the same problem, that there are many levels at which you can introduce something.
One thing I do like is that they have a compact example which basically does what the library does, up front, in one code block. Anyone can copy and run this in one go. Experts who know another package / language can see roughly what the notation is etc. Beginners will be mystified as to how it's working, but at least you get a plot which shows something worked.
My first attempt at such an example is here: https://gist.github.com/mcabbott/a61930793c5a064e10411ec427a6377a . Whether this ought to be instead of, or in addition to, existing text I don't know.
> One thing I do like is that they have a compact example which basically does what the library does, up front, in one code block. Anyone can copy and run this in one go. Experts who know another package / language can see roughly what the notation is etc. Beginners will be mystified as to how it's working, but at least you get a plot which shows something worked.
Yes! These are the how-to guides. I find them extremely helpful as well!
I am not sure where these small tutorials should be added, though.
Hmm, where should this tutorial go now? We do have a new Getting Started section, so should this go in there (along with other tutorials)?
Good question. PyTorch and JAX have tutorial sections for this, but at present we've outsourced that to the site. Maybe we could add one and pull a couple of website tutorials in for a later PR (but link to them for now)? That would also help prevent bitrot of the tutorials on the site.
So would that mean shifting the tutorials section to the Documenter website completely? Maybe I can shift the existing section before adding this guide to the same. We could still keep the website navbar link, but instead, link it to the new section in the Documenter site (like the ecosystem page)?
I like that idea. It might be easier to create that section in this PR and do a follow-up for migrating the other tutorials.
This PR now creates a new "tutorials" section. I will migrate the website's tutorials here in the follow-up PRs.
Note that there's already a tutorials section (on master) so simplest to add this there?
> Note that there's already a tutorials section (on master) so simplest to add this there?
Ah, I missed it because of how low it is in the sidebar. Maybe we should move it above the "Performance Tips" section?