neural_factorization_machine
TensorFlow Implementation of Neural Factorization Machine
I ran it; the NFM code performs no better than FM, and I don't know why.
https://github.com/hexiangnan/neural_factorization_machine/blob/master/LoadData.py#L47 In the `read_features()` function, you just initialize a dict and record each feature name together with only the first user that has that feature!! Nonsense!! Data is lost. This is a...
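For contrast with the complaint above, a minimal sketch of a feature-indexing pass that maps every distinct feature to a dense id rather than remembering only the first line it appears on. This is not the repo's code; the libFM-style input format (`label feat:value ...`) and the function name are assumptions:

```python
def build_feature_index(lines):
    """Map every distinct feature string to a dense integer id.

    `lines` are libFM-style: "label feat1:1 feat2:1 ...".
    Every item on every line is inspected, not just the first occurrence.
    """
    features = {}
    for line in lines:
        for item in line.strip().split()[1:]:  # skip the label
            name = item.split(":")[0]
            if name not in features:
                features[name] = len(features)
    return features

lines = ["1 a:1 b:1", "0 b:1 c:1"]
print(build_feature_index(lines))  # {'a': 0, 'b': 1, 'c': 2}
```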
Hello, I was experiencing this issue: https://github.com/hexiangnan/neural_factorization_machine/issues/2. So I replaced calls to `sub` with `subtract`, because the op was renamed in TensorFlow (https://github.com/tensorflow/tensorflow/issues/7032). After this update, the code will run under...
Do you know of a code example that runs the criteo-1tb-benchmark fully locally, without Spark (a kind of online learning)? For https://labs.criteo.com/2013/12/download-terabyte-click-logs-2/
In FM.py:

```python
elif self.loss_type == 'log_loss':
    self.out = tf.sigmoid(self.out)
    if self.lambda_bilinear > 0:
        self.loss = tf.contrib.losses.log_loss(
            self.out, self.train_labels, weight=1.0, epsilon=1e-07, scope=None
        ) + tf.contrib.layers.l2_regularizer(self.lamda_bilinear)(
            self.weights['feature_embeddings'])  # regulizer
    else:
        self.loss = tf.contrib.losses.log_loss(
            self.out, self.train_labels, weight=1.0, epsilon=1e-07, ...
```
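For intuition about what that loss computes, here is a plain-Python sketch of an epsilon-guarded log loss plus an L2 penalty. This is not the repo's TF graph; the `eps` guard and the 0.5 factor in the penalty follow TF's documented `log_loss` / `l2_regularizer` behavior, and the function names are mine:

```python
import math

def log_loss(preds, labels, eps=1e-7):
    # mean binary cross-entropy with an epsilon guard inside the logs
    return sum(-y * math.log(p + eps) - (1 - y) * math.log(1 - p + eps)
               for p, y in zip(preds, labels)) / len(preds)

def l2_penalty(weights, lam):
    # tf.contrib.layers.l2_regularizer(lam) computes lam * sum(w^2) / 2
    return lam * sum(w * w for w in weights) / 2.0

# regularized loss on a toy batch, with a hypothetical embedding vector
loss = log_loss([0.9, 0.2], [1, 0]) + l2_penalty([0.5, -0.5], lam=0.1)
```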
```python
all_weights['feature_bias'] = tf.Variable(
    tf.random_uniform([self.features_M, 1], 0.0, 0.0), name='feature_bias')  # features_M * 1
```

The elements of `all_weights['feature_bias']` are all zeros (a uniform draw on [0.0, 0.0]). So in

```python
# _________out _________
Bilinear = tf.reduce_sum(self.FM, 1, keep_dims=True)  # ...
```
In DeepFM, continuous features are multiplied by their corresponding embedding vectors, while in your code no such multiplication appears. Can NFM not deal with continuous features?
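For reference, the DeepFM-style handling the question describes scales a continuous field's embedding by the raw feature value. A minimal sketch (the function name is hypothetical, and embeddings are plain lists here rather than TF tensors):

```python
def scaled_embedding(value, emb):
    # continuous feature: multiply the field's embedding by the raw value
    return [value * e for e in emb]

print(scaled_embedding(3.0, [0.5, -1.0]))  # [1.5, -3.0]
```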
```python
# Model.
# _________ sum_square part _____________
# get the summed up embeddings of features.
nonzero_embeddings = tf.nn.embedding_lookup(self.weights['feature_embeddings'], self.train_features)
self.summed_features_emb = tf.reduce_sum(nonzero_embeddings, 1)  # None * K
# get the...
```
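The truncated snippet is the start of FM's sum-of-squares identity, 0.5 * ((Σᵢ vᵢ)² − Σᵢ vᵢ²), which equals the explicit pairwise interaction sum Σ_{i<j} vᵢ ⊙ vⱼ. A plain-Python sketch of that identity (embeddings as lists of length K; function names are mine, not the repo's):

```python
def bi_interaction(embeddings):
    """0.5 * ((sum_i v_i)^2 - sum_i v_i^2), elementwise over the K dims."""
    k = len(embeddings[0])
    summed = [sum(v[d] for v in embeddings) for d in range(k)]
    square_of_sum = [s * s for s in summed]
    sum_of_squares = [sum(v[d] ** 2 for v in embeddings) for d in range(k)]
    return [0.5 * (a - b) for a, b in zip(square_of_sum, sum_of_squares)]

def pairwise(embeddings):
    """Explicit sum over pairs: sum_{i<j} v_i * v_j, elementwise."""
    k = len(embeddings[0])
    out = [0.0] * k
    for i in range(len(embeddings)):
        for j in range(i + 1, len(embeddings)):
            for d in range(k):
                out[d] += embeddings[i][d] * embeddings[j][d]
    return out

embs = [[1.0, 2.0], [3.0, 4.0], [0.5, -1.0]]
# the O(n*K) identity matches the O(n^2*K) pairwise sum
assert all(abs(a - b) < 1e-9 for a, b in zip(bi_interaction(embs), pairwise(embs)))
```

The identity is why FM (and NFM's Bi-Interaction layer) can compute all pairwise interactions in linear rather than quadratic time in the number of features.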
As the code shows, it seems the deep-layer params are not loaded when using the pretrained model: https://github.com/hexiangnan/neural_factorization_machine/blob/master/NeuralFM.py#L202-L217. Is that on purpose, meaning the daily model has a brand new...
I changed the parameter `layers` from [64] to [64,128,256,128,64], and I hit the error "InvalidArgumentError (see above for traceback): slice index 4 of dimension 0 out of bounds." How could I...