
online learning with neural networks in R is slow

markusdumke opened this issue 6 years ago · 8 comments

@smilesun Incremental learning with neural networks in R is very slow. I made some benchmarking experiments; mxnet is twice as fast as keras but still very slow. Not sure what we can do about this.

https://github.com/compstat-lmu/papers_2017_reinforcelearn_software/blob/da9659e583583d3b80d8da5d37f4bf3c876b6f8c/examples/online_learning_with_neural_networks.R#L133-L139

markusdumke · Dec 01 '17, 11:12
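For context, the slow pattern is roughly one `fit()` call per environment step. A minimal stand-alone sketch of that pattern (the model and data here are made up, not the linked benchmark):

```r
# Minimal sketch of the slow pattern: one fit() call per step.
# Model and data are stand-ins, not the linked benchmark code.
library(keras)

model <- keras_model_sequential() %>%
  layer_dense(units = 32, activation = "relu", input_shape = 4) %>%
  layer_dense(units = 2)
model %>% compile(loss = "mse", optimizer = optimizer_adam())

for (step in 1:100) {
  x <- matrix(rnorm(4), nrow = 1)  # one observation per step
  y <- matrix(rnorm(2), nrow = 1)
  # Each call pays the full R-to-Python round trip and fit() setup cost,
  # which dominates the actual gradient computation for tiny batches.
  model %>% fit(x, y, epochs = 1, batch_size = 1, verbose = 0)
}
```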

Is there a possibility to not run the model update at each step?

berndbischl · Dec 01 '17, 11:12

But could we do it in "bulk", i.e. update only every few iterations, together with replay?

berndbischl · Dec 01 '17, 11:12
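A rough way to see the potential win of bulk updates, reusing the model from the sketch above (timings will vary by machine, and the two variants are not equivalent training-wise: 256 one-row gradient steps vs. a few larger-batch steps):

```r
# Sketch: per-call overhead vs. one bulk fit() over the same data.
library(keras)
model <- keras_model_sequential() %>%
  layer_dense(units = 32, activation = "relu", input_shape = 4) %>%
  layer_dense(units = 2) %>%
  compile(loss = "mse", optimizer = optimizer_adam())

x <- matrix(rnorm(4 * 256), ncol = 4)
y <- matrix(rnorm(2 * 256), ncol = 2)

system.time(
  for (i in 1:256)  # 256 separate one-row fit() calls
    model %>% fit(x[i, , drop = FALSE], y[i, , drop = FALSE],
                  epochs = 1, verbose = 0)
)
system.time(
  # one fit() call over the buffered data: 8 steps of batch size 32
  model %>% fit(x, y, epochs = 1, batch_size = 32, verbose = 0)
)
```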

Yes, with a smart replay we could use mlr.

> but to do it in "bulk" like very few iterations? together with replay?

smilesun · Dec 01 '17, 11:12
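If updates happen in bulk anyway, the replayed transitions are just a data.frame, so any mlr learner could be refit on them periodically. A loose sketch with a stand-in learner (the replay data and the choice of `regr.lm` are hypothetical):

```r
# Sketch: periodic batch refit with mlr on replayed transitions.
# "regr.lm" is a stand-in; any mlr regression learner could
# approximate the value function here.
library(mlr)

replay <- data.frame(s1 = rnorm(500), s2 = rnorm(500),
                     target = rnorm(500))  # made-up replay memory

task <- makeRegrTask(data = replay, target = "target")
learner <- makeLearner("regr.lm")
model <- train(learner, task)

# Refit every few hundred steps on the refreshed buffer instead of
# updating incrementally; predictions stay cheap in between.
preds <- predict(model, newdata = replay[1:5, ])
```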

@smilesun Do you actually know any paper where they do this? I think in all the Sutton book examples, updates are made at each step. But it is certainly possible. Though I think it would still be good to have the usual incremental learning with one or more training examples at each step, because that's what people are used to.

markusdumke · Dec 01 '17, 12:12

> Though I think it would still be good to have the usual incremental learning with one or more training examples at each step, because that's what people are used to

I am not suggesting we throw this out. But if we provide an efficient alternative, it doesn't hurt as much (for now) that the 1-step update is so slow.

berndbischl · Dec 01 '17, 12:12

Yes, of course. Maybe we can have a user-settable parameter that controls after how many steps/episodes to update.

markusdumke · Dec 01 '17, 12:12
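One possible shape for that parameter (the name `update_every` and this interface are hypothetical, not an existing API): a closure that buffers transitions and only calls `fit()` at the chosen interval, where `update_every = 1` recovers the current per-step behaviour.

```r
# Hypothetical sketch of the proposed parameter: fit() runs only every
# `update_every` steps; update_every = 1 gives the usual per-step update.
library(keras)

make_updater <- function(model, update_every = 1L) {
  buf_x <- list(); buf_y <- list(); n <- 0L
  function(x, y) {
    n <<- n + 1L
    buf_x[[length(buf_x) + 1L]] <<- x
    buf_y[[length(buf_y) + 1L]] <<- y
    if (n %% update_every == 0L) {
      # one bulk update over the buffered transitions, then reset
      fit(model, do.call(rbind, buf_x), do.call(rbind, buf_y),
          epochs = 1, verbose = 0)
      buf_x <<- list(); buf_y <<- list()
    }
    invisible(NULL)
  }
}

# usage: update <- make_updater(model, update_every = 32)
# then call update(x, y) once per environment step
```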

Maybe we could wrap the C API for TensorFlow.

smilesun · Dec 08 '17, 12:12

Currently I think the speed is OK; I only get the warning below.

/home/sunxd/anaconda2/lib/python2.7/site-packages/keras/callbacks.py:93: UserWarning: Method on_batch_begin() is slow compared to the batch update (0.127413). Check your callbacks.
  % delta_t_median)

smilesun · May 17 '18, 14:05