ybCliff
Though I can see you have implemented this function in another repository, 'video-caption-openNMT.pytorch', it is hard to comprehend. Would you please make it available in this repository as well? Thanks a...
To better understand the reinforcement learning part of the paper, what related material should I read? Since I normally use PyTorch, I am not very comfortable reading the TF code, and I did not fully understand the loss part:

1. loss_task: this one is easy to understand; it is the cross-entropy of the predicted words.
2. rewards: the more accurately a word is predicted, the higher the reward.
3. neg_log_prob: the sign is flipped here (so maximizing the expectation becomes minimizing the loss); the larger the probability of the action, the smaller the value.
4. neg_log_prob_step: is this neg_log_prob multiplied by an upper-triangular matrix? What is the meaning of that? Is the result something like the following (a sketch of my reading follows after this list)?
   ```
   [ sample 1: [a11, a11+a12, a11+a12+a13, ...],
     sample 2: [a21, a21+a22, a21+a22+a23, ...] ]
   ```
5. loss_RL: when computing the final reinforcement learning loss, it is multiplied by 0.3; is that a hyperparameter? Also, where is the entropy regularization mentioned in the paper reflected? I could not find it.

Would you consider converting it to PyTorch?...
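For item 4, here is a minimal PyTorch sketch of my reading (my own reconstruction with made-up shapes, not the repository's code): multiplying the per-step negative log-probabilities by an upper-triangular matrix of ones produces exactly the running sums written above, i.e. the cumulative negative log-probability of the sampled prefix at each step.

```python
import torch

# Hypothetical shapes; not taken from the repository.
batch_size, T = 2, 4

# Per-step negative log-probabilities -log pi(a_t | s_t), shape [batch, T].
neg_log_prob = torch.rand(batch_size, T)

# Upper-triangular matrix of ones (ones on and above the diagonal).
upper_tri = torch.triu(torch.ones(T, T))

# Column t of the product is the sum of steps 1..t, i.e. a running sum.
neg_log_prob_step = neg_log_prob @ upper_tri

# The same result, written directly as a cumulative sum.
assert torch.allclose(neg_log_prob_step, torch.cumsum(neg_log_prob, dim=1))
```

If that reading is right, neg_log_prob_step is just a cumulative sum over time and could be written with `torch.cumsum` in a PyTorch port.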
There are 'unverified' and 'clean' captions in the Microsoft corpus file, roughly 40 English captions per video. When I use cocoeval to compute metrics like BLEU@4, METEOR, ROUGE_L and...
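In case it helps, this is roughly the evaluation setup I have in mind (a sketch assuming the standard coco-caption / pycocoevalcap scorers; the captions below are placeholders, not real entries from the corpus file):

```python
from pycocoevalcap.bleu.bleu import Bleu
from pycocoevalcap.meteor.meteor import Meteor
from pycocoevalcap.rouge.rouge import Rouge

# gts: video id -> all reference captions for that video (~40 English ones per video)
# res: video id -> a single-element list holding the generated caption
gts = {'vid1': ['a man is singing on stage', 'a person sings a song']}
res = {'vid1': ['a man sings a song']}

for name, scorer in [('BLEU', Bleu(4)), ('METEOR', Meteor()), ('ROUGE_L', Rouge())]:
    score, _ = scorer.compute_score(gts, res)
    print(name, score)  # Bleu(4) returns a list [B@1, B@2, B@3, B@4]
```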