Toy-Neural-Network-JS
Neural Network Feature (Wish) List
These are the basics:
- [x] Basic 2-layer network with bias
- [x] Activation functions which reuse the activation on the backwards pass (sigmoid, tanh, ReLU)
- [x] MSE cost function
- [x] Adjustable learning rate
- [ ] Additional examples and tests (see #41, #66, and #76)
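The "reuse the activation on the backwards pass" item above refers to writing each activation's derivative in terms of its own output, so the value saved on the forward pass can be reused during backpropagation. A minimal sketch of that idea (illustrative only, not the library's actual API):

```javascript
// Each activation pairs a forward function with a derivative expressed in
// terms of the already-activated output y, so backprop can reuse the value
// computed on the forward pass instead of recomputing from x.
const sigmoid = {
  func: x => 1 / (1 + Math.exp(-x)),
  dfunc: y => y * (1 - y),       // derivative in terms of y = sigmoid(x)
};
const tanh = {
  func: x => Math.tanh(x),
  dfunc: y => 1 - y * y,         // derivative in terms of y = tanh(x)
};
const relu = {
  func: x => Math.max(0, x),
  dfunc: y => (y > 0 ? 1 : 0),   // y > 0 exactly when x > 0
};
```

This works because sigmoid, tanh, and ReLU all happen to have derivatives that are simple functions of their outputs, which is what makes the reuse trick possible.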
These would be interesting to add:
- [ ] Multiple hidden layers (per #61)
- [ ] Semi-arbitrary activation functions (per #70 and #75)
- [ ] Arbitrary cost functions (see here)
- [ ] Automatically adapting learning rate (momentum; per #65, also see here via here)
- [ ] Multiple weight-initialization strategies (see here)
- [ ] Convolution layers (see here; also see here via here)
- [ ] Simple RNN (see here via here)
- [ ] Advanced optimization (see here)
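For the adapting-learning-rate item, momentum is one common scheme: each weight keeps a velocity that accumulates past gradients, so updates speed up along consistent directions and damp out when gradients flip sign. A hedged sketch (the names `lr` and `beta` are illustrative, not part of the library):

```javascript
// Momentum sketch: velocity accumulates a decaying sum of past gradients.
// beta controls how much history is kept; beta = 0 reduces to plain SGD.
function makeMomentumUpdater(lr = 0.1, beta = 0.9) {
  let velocity = 0;
  return function update(weight, gradient) {
    velocity = beta * velocity - lr * gradient;
    return weight + velocity;
  };
}
```

In the real library this would live per-weight inside the matrix update, but the single-scalar version shows the whole mechanism.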
The two big ones for me are the additional examples and the multiple hidden layers.
Those two open the door for lots of other possibilities. The additional examples can be used to test community-contributed features, and multiple hidden layers pave the way for convolution layers and the like, which in turn opens the door to some really cool art.
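One possible shape for the multiple-hidden-layers item is a constructor that takes an arbitrary list of layer sizes instead of fixed input/hidden/output counts, with the forward pass folding over the layer list. A rough sketch under that assumption (not the current API):

```javascript
// Illustrative generalization: new MultiLayerNet([3, 4, 4, 2]) builds a
// network with a 3-unit input, two 4-unit hidden layers, and a 2-unit output.
class MultiLayerNet {
  constructor(layerSizes) {
    this.weights = [];
    this.biases = [];
    for (let i = 1; i < layerSizes.length; i++) {
      const rows = layerSizes[i];
      const cols = layerSizes[i - 1];
      // Random weights and biases in [-1, 1]
      this.weights.push(
        Array.from({ length: rows }, () =>
          Array.from({ length: cols }, () => Math.random() * 2 - 1)
        )
      );
      this.biases.push(Array.from({ length: rows }, () => Math.random() * 2 - 1));
    }
  }
  predict(input) {
    let a = input;
    for (let i = 0; i < this.weights.length; i++) {
      a = this.weights[i].map((row, r) => {
        const z = row.reduce((sum, w, c) => sum + w * a[c], this.biases[i][r]);
        return 1 / (1 + Math.exp(-z)); // sigmoid activation
      });
    }
    return a;
  }
}
```

The existing 2-layer network would be the special case `[inputs, hidden, outputs]`, so the API change could stay backwards compatible.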