Nabiha Asghar
For the function `model:getParameters()`, here's what the documentation (https://github.com/torch/nn/blob/master/doc/module.md#flatparameters-flatgradparameters-getparameters) says:

> Since the storage of every weight and gradWeight is changed, this function should be called only once on a...
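To make the docs' warning concrete, here is a minimal sketch of the pattern they imply: call `getParameters()` exactly once, then reuse the returned flat tensors everywhere. The toy network below is illustrative, not from the repo under discussion.

```lua
require 'nn'

-- Toy network; any nn container behaves the same way.
local model = nn.Sequential()
model:add(nn.Linear(10, 5))
model:add(nn.Tanh())
model:add(nn.Linear(5, 2))

-- Call getParameters() ONCE: it re-allocates the storage of every weight
-- and gradWeight into two flat tensors. Calling it a second time would
-- flatten the already-flattened storages and can break existing references.
local params, gradParams = model:getParameters()

-- From here on, always reuse `params`/`gradParams` in the training loop,
-- e.g. passing them to an optim routine such as optim.sgd(feval, params, cfg).
```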
I'm looking at dataset.lua, where `decoderInputs` are being set up.

```lua
decoderInputs = torch.IntTensor(maxTargetOutputSeqLen-1, size):fill(0)
for samplenb = 1, #targetSeqs do
  trimmedEosToken = targetSeqs[samplenb]:sub(1,-2)
  for word = 1, trimmedEosToken:size(1) do
    if...
```
MMI
Hi. I'm reading through your code and trying to figure out where you are using the MMI (maximum mutual information) criterion. Could you please point that out? Thanks
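For reference, the MMI criterion being asked about is usually written as the MMI-antiLM decoding objective (as in Li et al.'s diversity-promoting objective; the weight $\lambda$ is a hyperparameter):

```latex
% Prefer targets T that are likely given the source S,
% while penalizing generically high-probability targets.
\hat{T} = \arg\max_{T} \left\{ \log p(T \mid S) - \lambda \log p(T) \right\}
```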
I came up with the following sanity check to ensure that the implementation, word embeddings, etc. are sound. I created a dataset of 100,000 lines that has the following...
Just curious ... will you be adding the code for your paper 'Deep Reinforcement Learning for Dialogue Generation' in this repository?