                        Feature add regularizer base class
This pull request closes #20.
- What I did
Implemented a `Regularizer` base class and basic documentation.
- How I did it
Followed the Keras approach to regularizers.
- How to verify it
- Tests for the regularizer are not yet implemented.
 
- [x] I updated the docs.
 
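As a rough illustration of the Keras-style design the description refers to, a minimal sketch might look like the following. The class and method names (`Regularizer`, `loss`, `grad`) and the default `lam` value are assumptions for illustration, not the PR's actual code:

```python
import numpy as np


class Regularizer:
    """Hypothetical base class: subclasses supply a penalty and its gradient."""

    def loss(self, param):
        raise NotImplementedError

    def grad(self, param):
        raise NotImplementedError


class L2Regularizer(Regularizer):
    """L2 weight penalty, (lam / 2) * ||param||^2."""

    def __init__(self, lam=0.01):
        self.lam = lam

    def loss(self, param):
        # scalar penalty added to the training loss
        return 0.5 * self.lam * np.sum(param ** 2)

    def grad(self, param):
        # d(penalty)/d(param) = lam * param, added to the weight gradient
        return self.lam * param
```

Splitting the penalty (`loss`) from its derivative (`grad`) mirrors how such objects are typically consumed: the forward pass needs the scalar, backprop needs the gradient.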
This pull request adds a new feature to numpy-ml. Asking @ddbourgin to take a look.
Sorry to let this sit - I need to think about the best way to include regularization on a per-layer basis. We'll need the loss objects to be able to access the regularization penalties at each layer and then add it to their formulation. We'll also need to adjust the appropriate layer gradients during each stage of backprop. This isn't particularly bad, we just need to make sure that the proper book-keeping is in place. I'm hoping I'll have some time later in the week to work on this.
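The bookkeeping described above can be sketched as follows. This is a hedged illustration of the idea (loss objects summing per-layer penalties, and backprop adjusting each layer's weight gradient), not the repository's actual API; the helper names and the `lam` parameter are assumptions:

```python
import numpy as np


def l2_penalty(W, lam):
    # per-layer regularization penalty, (lam / 2) * ||W||^2
    return 0.5 * lam * np.sum(W ** 2)


def total_loss(data_loss, layer_weights, lam=0.01):
    # the loss object accesses each layer's penalty and adds it
    # to its own formulation
    return data_loss + sum(l2_penalty(W, lam) for W in layer_weights)


def regularized_grad(dW, W, lam=0.01):
    # during backprop, each layer's weight gradient is adjusted
    # by the penalty's derivative, lam * W
    return dW + lam * W
```

The point of the bookkeeping is that the penalty term appears in two places that must stay consistent: once as a scalar in the loss, and once as `lam * W` in each layer's gradient.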
Also, a general comment: I'd prefer that we avoid directly copying Keras / Torch / tf code or documentation when possible (I realize that for very simple functions like these, there's really only a single way to write them, so obviously use your discretion). While it's fine (and in fact, encouraged) that we compare our implementations against these gold-standards, I think we should focus on trying to do our own work when it comes to implementing / documenting the behavior of the algorithms. A big goal of the project is to supplement packages like Keras by providing more explicit / transparent discussion and documentations of the algorithms.
That makes sense, except perhaps for some very simple functions. Before adding any major new feature, I'll open a discussion so we can agree on the design first. If you have any suggestions or instructions, please let me know.