playground
allow on-the-fly regularization changes
Dear Playground Maintainers,
This PR allows the user to change regularization on the fly. Other users have had difficulty getting regularization to do anything useful (e.g. #94). In my experiments with the playground, there is typically an early failure mode for regularization rates above the minimum of 1e-3, across a wide range of initializations. Allowing on-the-fly changes makes it possible to crank regularization up as high as 0.1 (occasionally 0.3), but only after first letting the network find a stable regime.
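The two-phase recipe above (train at the minimum rate until the network settles, then raise the rate) could be sketched as a simple schedule. This is a hypothetical illustration, not the playground's code; the warm-up length and the final rate of 0.1 are assumptions drawn from my experiments:

```python
def reg_rate(epoch, warmup_epochs=50):
    """Hypothetical two-phase regularization schedule.

    Train at the playground's minimum rate (1e-3) for warmup_epochs,
    then crank the rate up to 0.1 once the network has found a
    stable regime.
    """
    return 1e-3 if epoch < warmup_epochs else 0.1
```

In the playground itself this schedule is applied by hand, by moving the regularization-rate control while training runs.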
To me this feature is a win-win: because it is hidden from the user, it adds no complexity to the project, yet it enables more experimentation by the curious user, who can now explore effective uses of L1 and L2 regularization. Playing with on-the-fly regularization has given me a much more intuitive understanding of how L1 and L2 "push" the weights towards sparse and distributed representations, respectively.
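The sparse-versus-distributed intuition can be seen in a single gradient step on one weight. The sketch below is a hypothetical illustration (not the playground's code), with learning rate and regularization rate chosen for readability: the L1 penalty contributes a fixed-size pull toward zero, while the L2 penalty contributes a pull proportional to the weight:

```python
def l1_step(w, lr=0.1, lam=0.3):
    # L1 gradient is lam * sign(w): a constant-magnitude pull toward
    # zero, so small weights are driven to (or past) zero -> sparsity.
    sign = (w > 0) - (w < 0)
    return w - lr * lam * sign

def l2_step(w, lr=0.1, lam=0.3):
    # L2 gradient is lam * w: a pull proportional to the weight,
    # so all weights shrink but rarely reach exactly zero ->
    # smaller, more distributed weights.
    return w - lr * lam * w

weights = [0.02, 0.5, -1.0]
print([round(l1_step(w), 4) for w in weights])  # the small weight overshoots zero
print([round(l2_step(w), 4) for w in weights])  # every weight shrinks by 3%
```

Under L1 the tiny weight 0.02 is pushed past zero in one step, while under L2 it merely shrinks by 3%, matching the sparse-versus-distributed behavior described above.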
I hope you will consider adding this feature!
All the best, David
Dear @dcato98, it seems that your contribution follows a philosophy similar to the one we adopted in a fork of the playground (CooLearning: https://coolearning.github.io/playground/). Feel free to give it a try and let us know whether the many other parameters you can adjust on the fly inspire you :)
See you there!