Mikhail Trofimov
Hi @jerry-rubikloud! I think we need to implement 2 components: 1) BPRSampler. It will generate pairs from the source data and put them into a batch. Example: for N pairs...
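Roughly the kind of sampler I have in mind (a minimal numpy sketch; the name `bpr_sample_batch` and the binary-label assumption are mine, not part of tffm):

```
import numpy as np

def bpr_sample_batch(X, y, batch_size, seed=None):
    """Sample (positive, negative) row pairs for a BPR-style pairwise loss."""
    rng = np.random.RandomState(seed)
    pos_idx = np.where(y == 1)[0]
    neg_idx = np.where(y == 0)[0]
    p = rng.choice(pos_idx, size=batch_size)   # rows that should rank higher
    n = rng.choice(neg_idx, size=batch_size)   # rows that should rank lower
    # the pairwise loss would then be built from f(X[p]) - f(X[n])
    return X[p], X[n]
```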
Hi @Darinyazanr! I know how to implement multi-class for 2nd order efficiently, but it is not so easy for an arbitrary order. If you don't care about computational...
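For the 2nd-order case, what I mean is computing a separate FM score per class with the usual sum-of-squares trick; a rough numpy sketch (the shapes and names here are my own convention, not the actual implementation):

```
import numpy as np

def fm_multiclass_scores(X, w0, W, V):
    """2nd-order FM scores for each class via the sum-of-squares trick.

    X  : (n_samples, n_features)
    w0 : (n_classes,)                per-class bias
    W  : (n_features, n_classes)     per-class linear weights
    V  : (n_classes, n_features, k)  per-class factor matrices
    Returns (n_samples, n_classes) scores to feed into a softmax.
    """
    linear = w0 + X @ W                               # (n, C)
    XV = np.einsum('nf,cfk->nck', X, V)               # (n, C, k)
    X2V2 = np.einsum('nf,cfk->nck', X ** 2, V ** 2)   # (n, C, k)
    pairwise = 0.5 * (XV ** 2 - X2V2).sum(axis=2)     # (n, C)
    return linear + pairwise
```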
Hi, yes, it's possible; I'll add it to my todos. A possible workaround for now: in a loop, fit the model for several iterations, check the loss, and save if needed.
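Something like this (a sketch, assuming `fit(..., n_epochs=...)`, `predict_proba` and `save_state` behave the way I remember; the path and the metric are just examples):

```
from sklearn.metrics import log_loss

best_loss = float('inf')
for step in range(20):
    # continue training from the current weights for a few more epochs
    model.fit(X_tr, y_tr, n_epochs=5)
    # any validation metric works here; log-loss is just an example
    val_loss = log_loss(y_val, model.predict_proba(X_val))
    if val_loss < best_loss:
        best_loss = val_loss
        model.save_state('./best_model')  # keep the best weights so far
```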
As far as I know, TF tends to allocate a lot of memory even if it doesn't actually use it. But you are right, no memory optimization has been done here.
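If the issue is TF reserving (almost) all GPU memory up front, the standard TF-side workaround is `allow_growth`; a sketch, assuming the constructor still accepts a `session_config` argument:

```
import tensorflow as tf
from tffm import TFFMClassifier

# allow_growth makes TF allocate GPU memory on demand instead of reserving
# (almost) all of it at startup; it doesn't reduce what is actually used
config = tf.ConfigProto()
config.gpu_options.allow_growth = True

model = TFFMClassifier(
    order=2,
    rank=10,
    optimizer=tf.train.AdamOptimizer(learning_rate=0.01),
    session_config=config,  # assuming this kwarg is supported in your version
)
```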
Hi, the simplest way is data parallelism: just split the batch over several GPUs.
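In plain TF terms, something like this (an illustrative sketch only, not tffm code; `build_loss` stands in for whatever builds the FM loss from a sub-batch with shared weights):

```
import tensorflow as tf

def multi_gpu_train_op(build_loss, X, y, gpus=('/gpu:0', '/gpu:1'), lr=0.01):
    """Data parallelism: split the batch, compute gradients for the same
    (shared) variables on each GPU, average them, apply once."""
    opt = tf.train.GradientDescentOptimizer(lr)
    x_parts = tf.split(X, len(gpus), axis=0)   # batch size must be divisible
    y_parts = tf.split(y, len(gpus), axis=0)
    tower_grads = []
    for dev, xs, ys in zip(gpus, x_parts, y_parts):
        with tf.device(dev):
            tower_grads.append(opt.compute_gradients(build_loss(xs, ys)))
    averaged = []
    for grads_and_vars in zip(*tower_grads):   # group per variable
        grads = tf.stack([g for g, _ in grads_and_vars])  # assumes no None grads
        averaged.append((tf.reduce_mean(grads, axis=0), grads_and_vars[0][1]))
    return opt.apply_gradients(averaged)
```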
> in FMs there is no explicit batches as in NNs

You need to solve an optimization task. While it's common to use sample-wise updates in such settings (for example,...
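The point is that a mini-batch update is just the average of per-sample gradient updates, so nothing in the FM objective forces one sample at a time; a toy illustration (`grad_fn` is a stand-in for the per-sample gradient of the FM loss):

```
import numpy as np

def sgd_step(params, grad_fn, X_batch, y_batch, lr=0.01):
    """One mini-batch SGD step: average per-sample gradients over the batch.
    With batch_size = 1 this reduces to the usual sample-wise update."""
    grads = [grad_fn(params, x, y) for x, y in zip(X_batch, y_batch)]
    return params - lr * np.mean(grads, axis=0)
```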
Hi, any news there?
Hi! Thanks for the contribution! I will check it carefully over the weekend.
Hi, @kman0! It seems like, as-is, this PR doesn't work. This mini-example produces an assertion error:
```
X_tr = np.random.randn(10000, 23)
y_tr = np.zeros(10000)
y_tr[::2] = 1
model = TFFMClassifier(
    order=2,
    ...
```
Hi @benmccann, I believe you should check out https://github.com/dmlc/difacto -- from my point of view, it is the most scalable solution. Btw, LIBFFM (https://www.csie.ntu.edu.tw/~cjlin/libffm/) is a good pure C++ implementation which...