textsum-gan
TensorFlow re-implementation of GAN for text summarization
Hello, I'd like to ask why the loss function in discriminator.py is cross-entropy rather than min_φ −E_{Y∼p_data}[log D_φ(Y)] − E_{Y∼G_θ}[log(1 − D_φ(Y))]
The code is as follows:

```python
self.scores = tf.nn.xw_plus_b(self.h_drop, W, b, name="scores")  # (?, 2): the two columns are P(fake) and P(real)
self.ypred_for_auc = tf.nn.softmax(self.scores)  # (?, 2)
self.predictions = tf.argmax(self.scores, 1, name="predictions")  # (?,): 0 = predicted fake, 1 = predicted real
# Calculate mean cross-entropy loss
with tf.name_scope("loss"):
    losses = tf.nn.softmax_cross_entropy_with_logits(logits=self.scores, labels=self.input_y)
    self.loss = tf.reduce_mean(losses)
```
...
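For what it's worth, with a two-column softmax and one-hot labels the cross-entropy above reduces term by term to the GAN discriminator objective: −log D_φ(Y) for real samples and −log(1 − D_φ(Y)) for generated ones. A quick numerical check in plain NumPy (independent of the repo's code; the logits values are arbitrary examples):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Two-column logits: column 0 = "fake", column 1 = "real" (matching self.scores).
logits = np.array([[0.3, 1.2],    # a real summary
                   [0.9, -0.4]])  # a generated (fake) summary
labels = np.array([[0.0, 1.0],    # one-hot: real
                   [1.0, 0.0]])   # one-hot: fake

probs = softmax(logits)
d_real = probs[0, 1]  # D_phi(Y) for the real sample
d_fake = probs[1, 1]  # D_phi(Y) for the fake sample

# Softmax cross-entropy, as computed by tf.nn.softmax_cross_entropy_with_logits.
xent = -(labels * np.log(probs)).sum(axis=1)

# GAN discriminator loss terms: -log D(real) and -log(1 - D(fake)).
gan_terms = np.array([-np.log(d_real), -np.log(1.0 - d_fake)])

assert np.allclose(xent, gan_terms)  # identical, term by term
```

So minimizing the mean cross-entropy is the same as minimizing the discriminator side of the minimax objective; the two-column softmax just folds 1 − D_φ(Y) into the "fake" column.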
I read this paper and finally managed to find your source code, which was a real lifesaver for me. I was wondering, though, whether you have any code comparing it against other methods?
The code needs to be upgraded to TensorFlow 2.x, since TensorFlow 1.x is no longer supported.
In the gen_sample.py file, **parser.add_argument('--decode_dir', required=True, help="root of the decoded directory")** — this line refers to a decode directory with two child directories (reference and decoded). I don't understand how to create...
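If it helps, a minimal sketch of laying out that structure is below. It only assumes what the issue itself states (a root directory containing `reference` and `decoded` subdirectories, with one ground-truth and one generated summary per example); the root name and the per-file naming convention are illustrative assumptions, not necessarily what this repo expects:

```python
import os

# Assumed root; the two subdirectory names come from the issue above.
decode_dir = "decode_dir"
for sub in ("reference", "decoded"):
    os.makedirs(os.path.join(decode_dir, sub), exist_ok=True)

# Hypothetical naming scheme: one file per example in each subdirectory.
with open(os.path.join(decode_dir, "reference", "000000_reference.txt"), "w") as f:
    f.write("the gold summary for example 0\n")
with open(os.path.join(decode_dir, "decoded", "000000_decoded.txt"), "w") as f:
    f.write("the model's summary for example 0\n")
```

The pairing of files between the two subdirectories is what a ROUGE scorer typically consumes, so matching filenames (or indices) across `reference/` and `decoded/` is the important part.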
Is there a PyTorch version of the code?
I'm trying to run the code on the CNN/DailyMail dataset following the instructions in the README, but the loss doesn't seem to be decreasing, and when I try to decode...
Are the rewards calculated on the 1st batch and then multiplied into the 2nd batch?
How should I prepare discriminator_train_data.npz when training on my own data?
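A generic sketch of building such a file is below. The key names (`positive` for human-written summaries, `negative` for generator samples) and the padded-id layout are assumptions for illustration; the actual keys this repo's loader expects should be checked against its discriminator data-loading code:

```python
import numpy as np

vocab_size, seq_len, n = 5000, 30, 100
rng = np.random.default_rng(0)

# Placeholder data: in practice these would be token-id sequences, padded
# to seq_len, for real summaries and generator outputs respectively.
positive = rng.integers(0, vocab_size, size=(n, seq_len))  # real summaries
negative = rng.integers(0, vocab_size, size=(n, seq_len))  # generated samples

np.savez("discriminator_train_data.npz", positive=positive, negative=negative)

# Verify round-trip.
data = np.load("discriminator_train_data.npz")
assert data["positive"].shape == (n, seq_len)
assert data["negative"].shape == (n, seq_len)
```

The `.npz` format is just a zip of named arrays, so `np.load` gives back a dict-like object keyed by the names passed to `np.savez`.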
Can you post the training time and the results obtained with this implementation?