Kyle Daruwalla
Might also be helpful to skim #1579 to see some of the motivation behind the original tutorial.
How did you create this PR? Did you use the GitHub web interface? Knowing the tool will help us know how best to proceed.
You can continue to use the GitHub web interface to edit files. Just make sure you are editing your fork on the `patch-1` branch (link here: https://github.com/mattiasvillani/Flux.jl/tree/patch-1). Edit any...
The request is perfectly okay, but as an aside, isn't your example a use-case of `Parallel`?
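For reference, a rough sketch of what I mean by `Parallel` (the layer sizes and activations here are placeholders, not taken from your example):

```julia
using Flux

# Parallel applies each branch to the same input and combines
# the branch outputs with the given connection (here vcat).
two_branch = Parallel(vcat,
                      Dense(10, 4, relu),   # branch 1
                      Dense(10, 2, relu))   # branch 2

x = rand(Float32, 10)
two_branch(x)   # == vcat of both branch outputs, a 6-element vector
```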
`train` is already a vector of batches ([see here](https://github.com/FluxML/model-zoo/blob/52a7b8923ef7f0313b6e38765536166ae1ef7961/tutorials/60-minute-blitz/60-minute-blitz.jl#L313)), so iterating it in a for-loop will do mini-batch SGD. But our mini-batches appear to not be so mini...the batch size...
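Roughly what that loop looks like (a sketch in the blitz's implicit-parameter style; `model`, `loss`, and the learning rate are placeholders):

```julia
using Flux

opt = Descent(0.01)            # plain SGD
ps  = Flux.params(model)       # implicit parameters, as in the blitz

for (x, y) in train            # each element of `train` is one mini-batch
    gs = gradient(ps) do
        loss(model(x), y)
    end
    Flux.Optimise.update!(opt, ps, gs)
end
```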
Bump on this @pevnak. If this is too far in the rearview mirror, I'd suggest we open an issue and close this PR. That way it's clear what work is...
We might want to add this to the ecosystem page when the package is ready?
Deprecate Flux.Optimisers and implicit parameters in favour of Optimisers.jl and explicit parameters
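For anyone landing here, a minimal sketch of the explicit-parameter style this points to (the model, loss, and hyperparameters are just placeholders):

```julia
using Flux, Optimisers

model = Chain(Dense(10, 5, relu), Dense(5, 2))

# Optimiser state is set up once from the model itself,
# instead of collecting Flux.params(model).
state = Optimisers.setup(Optimisers.Adam(1e-3), model)

x, y = rand(Float32, 10, 8), rand(Float32, 2, 8)

# Gradients are taken with respect to the model (explicit parameters).
grads = Flux.gradient(m -> Flux.Losses.mse(m(x), y), model)

# update returns the new optimiser state and the updated model.
state, model = Optimisers.update(state, model, grads[1])
```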
Xref https://github.com/darsnack/ParameterSchedulers.jl/issues/34
Yeah, I can take a look tomorrow. What's unclear to me is whether we need some part of the NNlib PR that you linked. Currently, I don't know what code...
I have highlighted the appropriate sections in the text below. We want to sample many `x` values and test whether the mean and variance after dropout match the paper.
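Roughly the kind of check I have in mind (a sketch; `p`, the input value, and the sample count are arbitrary, I fix a single `x` and draw many dropout samples, and I'm assuming inverted dropout, so mean ≈ x and variance ≈ x²·p/(1 − p)):

```julia
using Flux, Statistics

p = 0.3
x = 2.0f0
n = 100_000

d = Dropout(p)
Flux.trainmode!(d)                    # force dropout to be active outside a gradient call

samples = [d([x])[1] for _ in 1:n]    # many draws for the same input value

mean(samples)                         # should be ≈ x
var(samples)                          # should be ≈ x^2 * p / (1 - p)
```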