Maxim Podkolzine
Got it. Do you use the same data as I did, i.e. the set of learning curves?
Hi @ymcasky Here is an [example of custom data provider](https://github.com/maxim5/hyper-engine/blob/master/hyperengine/examples/4_1_word2vec_embedding.py): it's a simple interface and you basically need to implement `next_batch` method. Note however that the interface currently works with...
Hi @ymcasky, 1. Since you only need to provide `next_batch`, you can load a new numpy array for each batch without holding the whole training set in memory. I'll...
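A minimal sketch of such a per-batch loader, assuming only that the framework calls a `next_batch(batch_size)` method on the provider; the class name, file layout, and `'x'`/`'y'` keys here are illustrative, not part of hyper-engine's actual API:

```python
import numpy as np

class LazyBatchProvider:
    """Illustrative data provider: loads one chunk from disk per batch
    instead of holding the whole training set in memory."""

    def __init__(self, chunk_paths):
        self._chunk_paths = chunk_paths  # paths to .npy files, one batch each
        self._index = 0

    def next_batch(self, batch_size):
        # Load just the next chunk from disk (hypothetical layout:
        # each file holds a dict with 'x' and 'y' arrays).
        path = self._chunk_paths[self._index % len(self._chunk_paths)]
        self._index += 1
        data = np.load(path, allow_pickle=True).item()
        return data['x'][:batch_size], data['y'][:batch_size]
```

The point is only that each call touches one file; the real interface may pass extra arguments or expect a different return shape.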
@linmufeng See my comment in https://github.com/tensorflow/tensorflow/issues/11157#issuecomment-353725791. It solves this problem too.
The solution for 61 is actually more useful: I didn't know about `flat`. If you drop one of them, I think it'd be great to mention `flat` in the answer.
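For reference, `numpy.ndarray.flat` is a 1-D, row-major iterator over an array that also supports indexing and assignment; a quick sketch:

```python
import numpy as np

a = np.arange(6).reshape(2, 3)  # [[0, 1, 2], [3, 4, 5]]
# `flat` indexes the array as if it were flattened (row-major order):
print(a.flat[4])       # element at flat index 4, i.e. a[1, 1] -> 4
a.flat[[0, 5]] = -1    # flat indexing also supports assignment
print(a)               # first and last elements are now -1
```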
@rougier I see, thanks. "Accumulate" means sum up, not collect. And the task directly corresponds to `weights` in `numpy.bincount`. Here's how they describe it: > If _weights_ is specified the...
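As a concrete illustration of that accumulation, `weights` in `numpy.bincount` sums the weight values falling into each bin rather than counting occurrences:

```python
import numpy as np

x = np.array([0, 1, 1, 2, 2, 2])            # bin index for each value
w = np.array([0.5, 1.0, 2.0, 0.25, 0.25, 0.5])
# With weights, bincount sums w[i] into bin x[i] instead of adding 1:
print(np.bincount(x, weights=w))  # [0.5, 3.0, 1.0]
```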
Hi @ealonsodb, Sorry for the delay and thanks for the explanation. Will it work if I use top-k expression, e.g. `query` + limit?
@haraldk I'm sure you've seen https://github.com/sejda-pdf/webp-imageio
Yeah, there aren't a lot of choices. That project seems to work, but hasn't updated `libwebp` in quite a while. So if JNI is OK for you, you...
Current implementation looks really dangerous. A real case from StackOverflow: [Mysterious ReLu](https://stackoverflow.com/q/48993004/712995)