Alex Rogozhnikov
https://github.com/Kyubyong/dc_tts/blob/8b38110875920923343778ff959d01501323765e/train.py#L131 I think you don't need to create a train operation for each element here; the model has quite a lot of parameters, so duplicating the training op is wasteful.
Integrating einsum with einops is a good direction:
- numpy.einsum is available (non-capitals, allows spaces, has some bugs before 1.16)
- tf.einsum accepts only two arguments, does not work with...
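For reference, a minimal check of the numpy behavior mentioned above: lowercase subscripts and spaces inside the subscript string are accepted (assuming numpy >= 1.16, where the earlier bugs are fixed):

```python
import numpy as np

a = np.arange(6).reshape(2, 3)
b = np.arange(12).reshape(3, 4)

# numpy strips whitespace from the subscripts string before parsing,
# so a spaced, einops-like notation already works
result = np.einsum('i j, j k -> i k', a, b)
assert np.array_equal(result, a @ b)
```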
Originated from patch #31
Need to investigate whether backend packages expose strides for analysis (or at least `as_contiguous`). This may help with optimizations.
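As a data point, numpy already exposes both pieces of information; whether the other backends do is the open question here. A quick numpy-only sketch of what such an analysis could look at:

```python
import numpy as np

a = np.zeros((4, 5), dtype=np.float32)
print(a.strides)                 # byte steps per axis: (20, 4) for C-contiguous float32
print(a.flags['C_CONTIGUOUS'])   # True

# a transposed view shares memory but is no longer C-contiguous
t = a.T
print(t.flags['C_CONTIGUOUS'])   # False

c = np.ascontiguousarray(t)      # forces a contiguous copy
print(c.flags['C_CONTIGUOUS'])   # True
```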
It would be nice to have it, but there are problems with backends:
- numpy.logaddexp.reduce is available (scipy.special.logsumexp is better, but I can't use it)
- tf.reduce_logsumexp is available
- ...
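To illustrate why a dedicated reduction matters here: the naive formula overflows for large inputs, while `numpy.logaddexp.reduce` stays finite (numpy-only sketch):

```python
import numpy as np

x = np.array([1000.0, 1000.0])

# naive: exp(1000) overflows to inf, so the result is inf
with np.errstate(over='ignore'):
    naive = np.log(np.sum(np.exp(x)))

# stable pairwise reduction: logsumexp([1000, 1000]) = 1000 + log(2)
stable = np.logaddexp.reduce(x)

print(naive)   # inf
print(stable)  # ~1000.6931
```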
- normally, this is an `mxnet` issue; it seems to have been like this for ages (see code around `MXNET_SPECIAL_MAX_NDIM`)

---

After digging into mxnet:
- neighboring reduced axes (and non-reduced axes)...
(Leaving this as an open answer to a common question.) Why do GBReweighter/UGradientBoostingClassifier produce different weights after each training? Both algorithms are based on stochastic tree boosting. Settings like `subsample` and `max_features`...
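The same effect is easy to demonstrate with any stochastic boosting implementation. A hedged sketch using sklearn's `GradientBoostingClassifier` (not hep_ml itself) with `subsample < 1`: fixing `random_state` makes runs reproducible, while different seeds give (slightly) different models.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.RandomState(0)
X = rng.normal(size=(500, 4))
y = (X[:, 0] + rng.normal(scale=0.5, size=500) > 0).astype(int)

def train_proba(random_state):
    # subsample < 1 means each tree is fit on a random subset of the data,
    # which is the source of the run-to-run variation
    clf = GradientBoostingClassifier(n_estimators=30, subsample=0.5,
                                     random_state=random_state)
    clf.fit(X, y)
    return clf.predict_proba(X)[:, 1]

# same seed -> identical predictions; different seeds -> different predictions
assert np.allclose(train_proba(42), train_proba(42))
assert not np.allclose(train_proba(1), train_proba(2))
```

The same recipe applies to hep_ml's reweighters: pass a fixed seed if you need bit-identical weights across trainings.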
This issue was observed and reported by Jack Wimberley. If there is a region with very few original samples, the decision tree can build a leaf with samples only from the target...
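The mechanism can be sketched numerically (illustrative only, not hep_ml's actual code): a reweighting leaf assigns a factor roughly proportional to n_target / n_original in its region, so a leaf that captures target samples but almost no original ones produces unbounded weights. Additive smoothing (a hypothetical regularization here; in practice a larger `min_samples_leaf` has a similar effect) keeps the factor finite:

```python
import numpy as np

n_target = 50
n_original = 0  # region containing no original samples at all

# naive per-leaf reweighting factor blows up to infinity
with np.errstate(divide='ignore'):
    naive = np.divide(n_target, n_original)

# additive smoothing (hypothetical constant) bounds the factor
smoothing = 10.0
regularized = (n_target + smoothing) / (n_original + smoothing)

print(naive, regularized)  # inf 6.0
```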
subj. This should also be mentioned in the documentation and the howto notebook.