
Loss computation in the DeepFM model

Open eralvc opened this issue 7 years ago • 5 comments

Around line 165, when the deep fully connected layers are built, L2 regularization is attached to their variables via weights_regularizer, e.g.:

```python
y_deep = tf.contrib.layers.fully_connected(
    inputs=deep_inputs, num_outputs=1, activation_fn=tf.identity,
    weights_regularizer=tf.contrib.layers.l2_regularizer(l2_reg), scope='deep_out')
```

Then, around line 189, the loss is defined as:

```python
loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=y, labels=labels)) + \
       l2_reg * tf.nn.l2_loss(FM_W) + \
       l2_reg * tf.nn.l2_loss(FM_V)
```

As I understand it, this loss never picks up the penalties registered for the variables regularized through weights_regularizer, so it should be changed to:

```python
loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=y, labels=labels)) + \
       l2_reg * tf.nn.l2_loss(FM_W) + \
       l2_reg * tf.nn.l2_loss(FM_V) + \
       tf.reduce_sum(tf.get_collection(tf.GraphKeys.REGULARIZATION_LOSSES))
```
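For reference, a minimal standalone sketch (TF 1.x with contrib; the names here are illustrative, not the repo's) showing that weights_regularizer only registers the penalty in GraphKeys.REGULARIZATION_LOSSES and that the collection has to be summed into the loss explicitly:

```python
import tensorflow as tf

l2_reg = 0.001
deep_inputs = tf.placeholder(tf.float32, [None, 16])

y_deep = tf.contrib.layers.fully_connected(
    inputs=deep_inputs, num_outputs=1, activation_fn=tf.identity,
    weights_regularizer=tf.contrib.layers.l2_regularizer(l2_reg),
    scope='deep_out')

# weights_regularizer does not change y_deep or any loss by itself; it only
# appends one scalar penalty tensor per regularized variable to this collection:
reg_terms = tf.get_collection(tf.GraphKeys.REGULARIZATION_LOSSES)
print(reg_terms)  # one l2 penalty tensor for the deep_out weights

# Either form folds the registered penalties into the training loss:
reg_sum = tf.reduce_sum(reg_terms)                  # as proposed above
reg_sum_alt = tf.losses.get_regularization_loss()   # built-in equivalent
```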

eralvc avatar Oct 26 '18 10:10 eralvc

lambdaji avatar Oct 27 '18 07:10 lambdaji

https://github.com/lambdaji/tf_repos/blob/master/deep_ctr/Model_pipeline/DeepFM.py#L189

```python
loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=y, labels=labels)) + \
       l2_reg * tf.nn.l2_loss(FM_W) + \
       l2_reg * tf.nn.l2_loss(FM_V)
```

If the loss here were restricted to only the parameters involved in the current mini-batch, back-propagation would be much faster.

JenkinsY94 avatar Nov 11 '18 05:11 JenkinsY94

> https://github.com/lambdaji/tf_repos/blob/master/deep_ctr/Model_pipeline/DeepFM.py#L189
>
> If the loss here were restricted to only the parameters involved in the current mini-batch, back-propagation would be much faster.

How would this be set up?
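One way this is sometimes done (a hypothetical sketch, not code from this repo; it reuses the FM_W / FM_V names from DeepFM.py, but the shapes and ids are made up) is to apply the L2 penalty only to the embedding rows looked up for the current batch:

```python
import tensorflow as tf

feature_size = 100000   # made-up sizes, for illustration only
field_size = 39
embedding_size = 8
l2_reg = 0.001

FM_W = tf.get_variable("fm_w", [feature_size], tf.float32,
                       initializer=tf.glorot_normal_initializer())
FM_V = tf.get_variable("fm_v", [feature_size, embedding_size], tf.float32,
                       initializer=tf.glorot_normal_initializer())

feat_ids = tf.placeholder(tf.int32, [None, field_size])  # ids in this mini-batch

# Gather only the rows this batch touches and penalize those slices.
batch_w = tf.nn.embedding_lookup(FM_W, feat_ids)   # [batch, field]
batch_v = tf.nn.embedding_lookup(FM_V, feat_ids)   # [batch, field, k]

# The gradient of this term is sparse (IndexedSlices), so back-propagation
# does not have to touch every row of the full FM_W / FM_V tables.
reg_loss = l2_reg * (tf.nn.l2_loss(batch_w) + tf.nn.l2_loss(batch_v))
```

Note that rows appearing more than once in a batch get penalized multiple times, so the effective regularization strength differs slightly from penalizing the full tables.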

nikenj avatar Nov 12 '18 08:11 nikenj

I tested this, and strangely enough, the results are almost identical with and without the change.

qk-huang avatar Nov 13 '18 07:11 qk-huang

When batch normalization layers are used, don't we also need to add these update ops?

```python
update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
with tf.control_dependencies(update_ops):
    train_op = optimizer.minimize(loss, global_step=tf.train.get_global_step())
```
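That is, the standard TF1 pattern in question, shown self-contained (a sketch, not DeepFM.py's actual code): the moving mean/variance updates created by batch normalization live in tf.GraphKeys.UPDATE_OPS and only run if they are wired in as control dependencies of the train op.

```python
import tensorflow as tf

x = tf.placeholder(tf.float32, [None, 16])
labels = tf.placeholder(tf.float32, [None, 1])
is_training = tf.placeholder(tf.bool, [])

h = tf.layers.dense(x, 32, activation=tf.nn.relu)
h = tf.layers.batch_normalization(h, training=is_training)  # adds ops to UPDATE_OPS
logits = tf.layers.dense(h, 1)

loss = tf.reduce_mean(
    tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=labels))

optimizer = tf.train.AdamOptimizer(1e-3)
# Without this block the moving mean/variance are never updated,
# and inference-time batch norm uses stale statistics.
update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
with tf.control_dependencies(update_ops):
    train_op = optimizer.minimize(
        loss, global_step=tf.train.get_or_create_global_step())
```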

nikenj avatar Nov 14 '18 07:11 nikenj