Benjamin Bossan

28 comments by Benjamin Bossan

Yes, for this specific case, that would work. For other cases, that could be an awkward solution. I could imagine that a more general solution would have a "cache" that...

Okay, so you would suggest using this if extra data needs to be saved?

But isn't the model just a blob? Instead of persisting the model, could we not persist something like `{'model': model, 'cache': cache}`? That way, we don't need to store something...
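
As a rough sketch of that idea, using `pickle` and a scikit-learn estimator as stand-ins for the real model, and `cache` as a placeholder for whatever extra data needs saving:

```python
import pickle

from sklearn.linear_model import LogisticRegression

# Stand-ins: any picklable model and any extra data to keep alongside it.
model = LogisticRegression()
cache = {'n_batches_seen': 0}

# Persist both as one object instead of persisting the model alone.
with open('model_bundle.pkl', 'wb') as f:
    pickle.dump({'model': model, 'cache': cache}, f)

# Both pieces come back together on load.
with open('model_bundle.pkl', 'rb') as f:
    bundle = pickle.load(f)
model, cache = bundle['model'], bundle['cache']
```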

In general, I see no obstacle to passing `Xi` and `yi` to `on_grad_computed`. If you'd like, you can open a PR for this. For your particular problem, however, I believe...
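
For illustration only, a custom callback could consume those arguments roughly as below. The `GradBatchInspector` name is hypothetical, and the sketch assumes the net forwards `Xi`/`yi` as `X`/`y` when notifying `on_grad_computed` (the change discussed here); without it, `X` and `y` simply stay `None`:

```python
from skorch.callbacks import Callback

class GradBatchInspector(Callback):
    """Hypothetical callback that wants both the gradients and the current
    batch at gradient time."""

    def initialize(self):
        self.grad_norms_ = []
        return self

    def on_grad_computed(self, net, named_parameters, X=None, y=None, **kwargs):
        # Sum the gradient norms over all parameters after backward().
        total = 0.0
        for _, param in named_parameters:
            if param.grad is not None:
                total += param.grad.norm().item()
        # Keep the batch size alongside, if the batch was passed along.
        batch_size = len(X) if X is not None else None
        self.grad_norms_.append((total, batch_size))
```

It would be attached the usual way, e.g. `NeuralNet(..., callbacks=[GradBatchInspector()])`.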

> I feel like this kind of thing -- data augmentation or regularization -- shouldn't really need a whole new NeuralNet class to make work

There are many ways to...
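
One such route for data augmentation, as a sketch, is to keep it out of the net entirely and put it into the dataset. The `AugmentedDataset` name and the noise augmentation below are only placeholders:

```python
import torch
from torch.utils.data import Dataset

class AugmentedDataset(Dataset):
    """Hypothetical wrapper that augments samples on the fly, so no
    NeuralNet subclass is needed for the augmentation itself."""

    def __init__(self, X, y, noise_std=0.1):
        self.X = torch.as_tensor(X, dtype=torch.float32)
        self.y = torch.as_tensor(y)
        self.noise_std = noise_std

    def __len__(self):
        return len(self.X)

    def __getitem__(self, i):
        # Simple augmentation: additive Gaussian noise on the input.
        x = self.X[i] + self.noise_std * torch.randn_like(self.X[i])
        return x, self.y[i]
```

A dataset like this can then be passed to `fit` directly (with `y=None`), leaving the `NeuralNet` class untouched.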

I believe we largely agree on what should and what shouldn't be done. The only missing piece of the puzzle is what use cases are general enough to require a...

> One thing all these issues have in common is that the input data changed.

Almost, since 1. does not necessarily imply that, but the exhaustion problem could be solved...
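
(To illustrate the exhaustion problem: a plain generator can only be iterated over once, so a second pass over the same object silently sees no data.)

```python
def batches():
    # A plain generator as a stand-in for lazily produced training data.
    for i in range(3):
        yield i

gen = batches()
print(list(gen))  # [0, 1, 2]
print(list(gen))  # [] -- exhausted, the second pass yields nothing
```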

I would really like to see a more complete solution than just looking at the amount of data. As mentioned, it is easy to run into errors that are not...
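
(A tiny illustration of why the amount of data alone is not enough: two arrays can have the same length but different content, so a length-based check would wrongly report that nothing changed.)

```python
import numpy as np

X_old = np.arange(10).reshape(5, 2)
X_new = X_old[::-1].copy()  # same shape and length, different content

assert len(X_old) == len(X_new)          # length check says "nothing changed"
assert not np.array_equal(X_old, X_new)  # but the data did change
```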

> We could even just change the default of caching to `false`, and consider `caching` an "advanced feature".

I would be reluctant to disable something that probably works just fine...
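
For reference, and assuming the caching in question is the prediction caching of the scoring callbacks, it can already be switched off per callback instead of library-wide. A minimal sketch, where the module is only a stand-in to keep the example self-contained:

```python
import torch.nn as nn
from skorch import NeuralNetClassifier
from skorch.callbacks import EpochScoring

class MyModule(nn.Module):
    # Trivial stand-in module.
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(20, 2), nn.LogSoftmax(dim=-1))

    def forward(self, X):
        return self.net(X)

net = NeuralNetClassifier(
    MyModule,
    callbacks=[
        # Opt out of prediction caching for this one scorer instead of
        # flipping the default for everyone.
        EpochScoring('f1', lower_is_better=False, use_caching=False),
    ],
)
```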