
Update overview.md

Open mattiasvillani opened this issue 2 years ago • 7 comments

Rewrote parts of the text to make it more clear and streamlined. See if you like it.

PR Checklist

  • [ ] Tests are added
  • [ ] Entry in NEWS.md
  • [x] Documentation, if applicable
  • [ ] API changes require approval from a committer (different from the author, if applicable)

mattiasvillani avatar Aug 02 '21 20:08 mattiasvillani

Might also be helpful to skim #1579 to see some of the motivation behind the original tutorial.

darsnack avatar Aug 03 '21 16:08 darsnack

I have now revised the PR to reflect all the helpful input from @ToucheSir, @darsnack, and @DhairyaLGandhi. I am new to opening PRs on docs and to receiving review comments like this (though I have 20+ years of teaching experience as a professor, partly in ML), so I am unsure how to proceed. Would it be easier to close this PR and open a new one with the revised version? Please advise.

mattiasvillani avatar Aug 03 '21 22:08 mattiasvillani

How did you create this PR? Did you use the GitHub web interface? Knowing the tool will help us know how best to proceed.

darsnack avatar Aug 03 '21 22:08 darsnack

Yes. My current revised version is on the fork that the GitHub web interface created.


mattiasvillani avatar Aug 03 '21 22:08 mattiasvillani

You can continue to use the GitHub web interface to edit files. Just make sure you are editing your local fork and the patch-1 branch (link here: https://github.com/mattiasvillani/Flux.jl/tree/patch-1). Edit any file, and when you want to save the changes, commit directly to patch-1. The PR will automatically incorporate those new commits.

As for dealing with the comments, I would just refer to them and make the changes you think are necessary by editing your fork's branch as described above. Don't bother trying to "commit the suggestions" in the PR web view.

darsnack avatar Aug 03 '21 22:08 darsnack
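For contributors who prefer the command line, the web-UI flow above can be sketched with plain git. This is a hypothetical illustration: a throwaway local repository stands in for the real fork (https://github.com/mattiasvillani/Flux.jl), so the commands run anywhere.

```shell
# Hypothetical command-line equivalent of the web-UI workflow described above.
# A throwaway local repo stands in for the real fork so the commands run anywhere;
# against the actual fork you would `git clone` it instead of `git init`.
git init -q demo && cd demo
git checkout -qb patch-1                          # work on the PR branch
echo "Rewrote parts of the text." > overview.md   # edit the file
git add overview.md
git -c user.name=demo -c user.email=demo@example.com commit -qm "Update overview.md"
git log --oneline patch-1                         # the commit lands on patch-1
```

Once pushed with `git push origin patch-1`, the open PR picks up the new commit automatically, just as with the web editor.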

Ok, thanks. I just pushed to patch-1 in my forked repo, and the PR was updated. And thanks for the tip about dealing with comments. Makes sense and will use that in the future.

Mattias


-- Mattias Villani, Professor

Department of Statistics Stockholm University e-mail: @.***

Dept. of Computer and Information Science Linköping University e-mail: @.***

Web: https://mattiasvillani.com http://mattiasvillani.com/

mattiasvillani avatar Aug 03 '21 22:08 mattiasvillani

Thanks for the comments. I committed a new revised version in which the summary is modified, partially following the suggestions from @.**** Let me know if I need to do anything else for now.

Mattias

On Wed, Aug 4, 2021 at 1:03 AM Kyle Daruwalla @.***> wrote:

@.**** requested changes on this pull request.

I do like the new flow, especially the new titles. I think the verbosity of the summary should be somewhere between the original and what's in the current version. Certain paragraphs are probably too redundant, but I do think a few could be helpful (I'm primarily thinking of when I've taught students with no CS/ML/stats background).

In docs/src/models/overview.md https://github.com/FluxML/Flux.jl/pull/1687#discussion_r682151602:

1×6 Matrix{Float32}:
 0.0  -1.4925  -2.98501  -4.47751  -5.97001  -7.46252

-In order to make better predictions, you'll need to provide a *loss function* to tell Flux how to objectively *evaluate* the quality of a prediction. Loss functions compute the cumulative distance between actual values and predictions.

+## Step 3 - Learn the model parameters on training data

+

+In order to make better predictions, you'll need to provide a *loss function* to tell Flux how to objectively *evaluate* the quality of a prediction. Loss functions compute the average distance between actual values and predictions so that more accurate predictions will yield a lower loss. Here is an example with the [mean squared error](https://www.statisticshowto.com/probability-and-statistics/statistics-definitions/mean-squared-error/) loss function:


⬇️ Suggested change

-In order to make better predictions, you'll need to provide a *loss function* to tell Flux how to objectively *evaluate* the quality of a prediction. Loss functions compute the average distance between actual values and predictions so that more accurate predictions will yield a lower loss. Here is an example with the [mean squared error](https://www.statisticshowto.com/probability-and-statistics/statistics-definitions/mean-squared-error/) loss function:

+In order to make better predictions, you'll need to provide a *loss function* to tell Flux how to objectively *evaluate* the quality of a prediction. Generally, loss functions compute the average distance between actual values and predictions so that more accurate predictions will yield a lower loss. Here is an example with the [mean squared error](https://www.statisticshowto.com/probability-and-statistics/statistics-definitions/mean-squared-error/) (MSE) loss function:
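As an aside, the mean squared error discussed in this suggestion is simple to compute by hand. Here is a minimal Julia sketch in the tutorial's REPL style; the `mse` definition is hand-rolled for illustration (Flux also ships a built-in `Flux.Losses.mse`), and the sample values are made up:

```julia
julia> y = [2.0, 4.0, 6.0];        # actual values (made-up sample data)

julia> ŷ = [2.5, 3.5, 6.0];        # model predictions

julia> mse(ŷ, y) = sum((ŷ .- y) .^ 2) / length(y)   # average squared distance
mse (generic function with 1 method)

julia> mse(ŷ, y)
0.16666666666666666
```

More accurate predictions shrink each `(ŷ - y)` term, so the loss drops toward zero, which is exactly the objective signal the tutorial says Flux needs.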


------------------------------

In docs/src/models/overview.md
<https://github.com/FluxML/Flux.jl/pull/1687#discussion_r682155106>:

> @@ -181,14 +153,11 @@ julia> y_test

 26  30  34  38  42

-The predictions are good. Here's how we got there.

-First, we gathered real-world data into the variables x_train, y_train, x_test, and y_test. The x_* data defines inputs, and the y_* data defines outputs. The *_train data is for training the model, and the *_test data is for verifying the model. Our data was based on the function 4x + 2.

I still feel that this particular paragraph can be helpful reinforcement for someone who has never seen machine learning. I'd imagine a student would read this and think "ok, I always need x_train, y_train, x_test, and y_test."
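For context, the four variables named in that paragraph could be built roughly like this. This is a sketch assuming the tutorial's 4x + 2 example; it reproduces the y_test values quoted above:

```julia
julia> actual(x) = 4x + 2          # the ground-truth function the tutorial models
actual (generic function with 1 method)

julia> x_train, x_test = hcat(0:5...), hcat(6:10...);   # inputs as 1×n matrices

julia> y_train, y_test = actual.(x_train), actual.(x_test);  # matching outputs

julia> y_test
1×5 Matrix{Int64}:
 26  30  34  38  42
```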

In docs/src/models/overview.md https://github.com/FluxML/Flux.jl/pull/1687#discussion_r682155995:

-The first parameter is the weight and the second is the bias. Flux will adjust predictions by iteratively changing these parameters according to the optimizer.

-This optimiser implements the classic gradient descent strategy. Now improve the parameters of the model with a call to `Flux.train!` like this:

+The `Flux.train!` function in Flux will use the optimizer to iteratively adjust the weights and biases in parameters to minimize the loss function. Calling train! once makes the optimizer take a single step toward the minimum:

⬇️ Suggested change

-The `Flux.train!` function in Flux will use the optimizer to iteratively adjust the weights and biases in parameters to minimize the loss function. Calling train! once makes the optimizer take a single step toward the minimum:

+The `Flux.train!` function in Flux will use the optimizer to iteratively adjust the weights and biases in parameters to minimize the loss function. Calling train! once makes the optimizer take a single step toward the minimum of our loss function:
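To make the "single step" concrete, here is a hand-rolled sketch of one gradient-descent update on the tutorial's 4x + 2 data. It is plain Julia with the MSE gradients written out manually for illustration (no Flux calls); in the actual tutorial, one train! call performs the analogous update:

```julia
julia> x = [1.0, 2.0, 3.0]; y = 4 .* x .+ 2;    # data from 4x + 2

julia> W, b, η = 0.0, 0.0, 0.01;                # initial weight, bias, learning rate

julia> loss(W, b) = sum((W .* x .+ b .- y) .^ 2) / length(x);

julia> loss(W, b)
110.66666666666667

julia> gW = 2 * sum((W .* x .+ b .- y) .* x) / length(x);   # ∂loss/∂W

julia> gb = 2 * sum(W .* x .+ b .- y) / length(x);          # ∂loss/∂b

julia> W -= η * gW; b -= η * gb;    # one gradient-descent step on each parameter

julia> loss(W, b)                   # a single step already lowered the loss to ≈ 87.48
```

Repeating such steps walks the parameters toward W ≈ 4 and b ≈ 2, which is why the tutorial's predictions end up close to y_test.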



mattiasvillani avatar Aug 04 '21 21:08 mattiasvillani