
Hierarchical Encoder Decoder RNN (HRED) with Truncated Backpropagation Through Time (Truncated BPTT)
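The repository implements this training scheme in Theano. For readers unfamiliar with truncated BPTT itself, below is a minimal plain-NumPy sketch of the idea for a vanilla RNN language model: the hidden state is carried across windows, but gradients are only backpropagated within each window. The model sizes, learning rate, truncation length, and random toy data are illustrative assumptions, not values used by this codebase.

```python
# Minimal truncated-BPTT sketch (plain NumPy, not the repository's Theano code).
# All hyperparameters and the toy next-token task are assumptions for illustration.
import numpy as np

rng = np.random.default_rng(0)
H, V, T, TRUNC = 32, 10, 200, 20      # hidden size, vocab size, sequence length, truncation window

# Parameters of a vanilla tanh RNN with a softmax output layer.
Wxh = rng.normal(0, 0.1, (H, V))
Whh = rng.normal(0, 0.1, (H, H))
Why = rng.normal(0, 0.1, (V, H))

def one_hot(i):
    v = np.zeros(V)
    v[i] = 1.0
    return v

seq = rng.integers(0, V, T + 1)       # toy data: predict the next random token
h_prev = np.zeros(H)
lr = 0.1

# Process the long sequence in windows of TRUNC steps.
for start in range(0, T, TRUNC):
    steps = range(start, min(start + TRUNC, T))
    xs, hs, ps = {}, {-1: h_prev}, {}
    loss = 0.0

    # Forward pass through the current window; hs[-1] is the state carried
    # over from the previous window and is treated as a constant.
    for k, t in enumerate(steps):
        xs[k] = one_hot(seq[t])
        hs[k] = np.tanh(Wxh @ xs[k] + Whh @ hs[k - 1])
        logits = Why @ hs[k]
        p = np.exp(logits - logits.max())
        ps[k] = p / p.sum()
        loss -= np.log(ps[k][seq[t + 1]])

    # Backward pass: backpropagation stops at the window boundary.
    dWxh, dWhh, dWhy = np.zeros_like(Wxh), np.zeros_like(Whh), np.zeros_like(Why)
    dh_next = np.zeros(H)
    for k in reversed(range(len(steps))):
        t = start + k
        dy = ps[k].copy()
        dy[seq[t + 1]] -= 1.0                 # d(cross-entropy)/d(logits)
        dWhy += np.outer(dy, hs[k])
        dh = Why.T @ dy + dh_next
        draw = (1.0 - hs[k] ** 2) * dh        # backprop through tanh
        dWxh += np.outer(draw, xs[k])
        dWhh += np.outer(draw, hs[k - 1])
        dh_next = Whh.T @ draw

    # Plain SGD update, then carry the last hidden state into the next window.
    for W, dW in ((Wxh, dWxh), (Whh, dWhh), (Why, dWhy)):
        W -= lr * dW / len(steps)
    h_prev = hs[len(steps) - 1]
    print("window starting at %d, loss per step %.3f" % (start, loss / len(steps)))
```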

17 issues in hed-dlg-truncated, sorted by most recently updated

Hi Julian, I'd really appreciate it if you could share the Movie-Triples dataset. This is for an academic research project and of course, the relevant paper will be duly cited....

Fix crash caused by saving the model to a file in a non-existent subdirectory. Fix crash caused by using cPickle.load/dump on a file that is not opened in binary mode.
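As a quick illustration of the two fixes, here is a sketch against Python 3's pickle module (the repository itself uses Python 2's cPickle, and the path below is a placeholder, not one of the repository's actual paths):

```python
# Sketch of the two fixes described in this pull request (Python 3 pickle;
# the repository uses Python 2's cPickle). The path is a placeholder.
import os
import pickle

def save_model(state, path="Output/models/model_state.pkl"):
    # Fix 1: create the target subdirectory if it does not exist,
    # instead of crashing inside open().
    os.makedirs(os.path.dirname(path), exist_ok=True)
    # Fix 2: pickle files must be opened in binary mode ('wb'/'rb');
    # text mode corrupts the stream and makes dump/load fail.
    with open(path, "wb") as f:
        pickle.dump(state, f)

def load_model(path="Output/models/model_state.pkl"):
    with open(path, "rb") as f:
        return pickle.load(f)
```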

I receive the following error when I run sample.py on a model trained with VHRED. Traceback (most recent call last): File "sample.py", line 114, in main() File "sample.py", line 101,...

Thank you for your contribution to dialogue generation. When I read your paper 'A Hierarchical Latent Variable Encoder-Decoder Model for Generating Dialogues', the formula used to compute the output of...
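The question above is truncated, but for context, the paper's central formula is the factorization of a dialogue of N utterances over per-utterance Gaussian latent variables. Transcribed up to notation, it is roughly:

```latex
% VHRED generative model (transcribed, up to notation, from the paper cited above):
% each utterance w_n is generated conditioned on the preceding utterances and a
% per-utterance latent variable z_n drawn from a context-dependent Gaussian prior.
\[
P_\theta(w_1, \dots, w_N)
  = \prod_{n=1}^{N} P_\theta(z_n \mid w_1, \dots, w_{n-1})\,
                    P_\theta(w_n \mid z_n, w_1, \dots, w_{n-1}),
\]
\[
P_\theta(z_n \mid w_1, \dots, w_{n-1})
  = \mathcal{N}\big(\mu_{\mathrm{prior}}(w_1, \dots, w_{n-1}),\,
                    \Sigma_{\mathrm{prior}}(w_1, \dots, w_{n-1})\big).
\]
```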

I am facing the error 'InconsistencyError: Trying to reintroduce a removed node' while running HRED on the Ubuntu dataset. Here is the stack trace: ERROR (theano.gof.opt): SeqOptimizer apply 2017-11-14 01:11:53,554: theano.gof.opt: ERROR:...

Hi there, Thanks for making your code available for independent review. However, reusing code without an explicit license can be problematic. If you want people to reuse this code, it'd...

Hi, I want to run this code with a new dataset and I have some questions about that. 1. According to your description in **Creating Datasets**, I think the format...
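Since the question above is truncated, the following is only a rough sketch of what a binarized corpus plus dictionary might look like. The token ids, special symbols, file names, and dictionary layout are all assumptions; convert-text2dict.py in the repository is the authoritative reference for the **Creating Datasets** step.

```python
# Rough, assumed sketch of a binarized dialogue corpus for this codebase.
# The exact format (token ids, special symbols, dictionary tuple layout) is
# a guess -- consult convert-text2dict.py for the real "Creating Datasets" format.
import pickle

# Hypothetical vocabulary: word -> id, with an explicit end-of-utterance token.
vocab = {"</s>": 1, "hello": 2, "how": 3, "are": 4, "you": 5, "fine": 6, "thanks": 7}

# Each dialogue is one flat list of token ids; utterances are terminated
# by the end-of-utterance id.
dialogues = [
    [2, 1, 3, 4, 5, 1, 6, 7, 1],   # "hello </s> how are you </s> fine thanks </s>"
]

with open("Training.dialogues.pkl", "wb") as f:   # placeholder file name
    pickle.dump(dialogues, f)

with open("Training.dict.pkl", "wb") as f:        # placeholder file name
    pickle.dump(sorted(vocab.items(), key=lambda kv: kv[1]), f)
```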

**self.encoder_fn is similar in both conditions**

    if self.add_latent_gaussian_per_utterance:
        self.encoder_fn = theano.function(inputs=[self.x_data, self.x_data_reversed, \
                                                  self.x_max_length], \
                                          outputs=[h, hs_complete, hd],
                                          on_unused_input='warn', name="encoder_fn")
        #self.encoder_fn = theano.function(inputs=[self.x_data, self.x_data_reversed, \
        #                                          self.x_max_length], \
        #...

    python convert-wordemb-dict2emb-matrix.py ./Data/training.dict.pkl ./wordEmb/GoogleNews-vectors-negative300.bin Word2Vec_WordEm

The following non-word tokens will not be extracted from the pretrained embeddings: ['', '', '', '', '', '.', ',', '``', "''", '[', ']', '`', '-',...

    python convert-wordemb-dict2emb-matrix.py Data/training.dict.pkl WordEmb/GoogleNews-vectors-negative300.bin --apply_spelling_corrections --emb_dim 300 Word2Vec_WordEmb

    raise Exception("Embedding dictionary file not found!")
    Exception: Embedding dictionary file not found!
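For context on what the last two issues are attempting, here is a hedged sketch of the underlying operation, not the repository's convert-wordemb-dict2emb-matrix.py: align pretrained word2vec vectors with the model's dictionary and save a dense embedding matrix. The use of gensim, the file paths, and the dictionary tuple layout are assumptions.

```python
# Hedged sketch (not the repository's script): build an embedding matrix for the
# model dictionary from pretrained GoogleNews word2vec vectors. gensim, the paths,
# and the dictionary entry layout below are assumptions.
import pickle
import numpy as np
from gensim.models import KeyedVectors

emb_dim = 300
wv = KeyedVectors.load_word2vec_format(
    "WordEmb/GoogleNews-vectors-negative300.bin", binary=True)

# Assumed layout: training.dict.pkl is a list of tuples whose first two fields
# are (word, word_id); the exact tuple layout in this codebase may differ.
with open("Data/training.dict.pkl", "rb") as f:
    model_dict = pickle.load(f)

vocab_size = max(entry[1] for entry in model_dict) + 1
rng = np.random.default_rng(0)
# Rows for words missing from the pretrained vectors stay randomly initialized.
emb_matrix = rng.normal(0, 0.01, (vocab_size, emb_dim))

found = 0
for entry in model_dict:
    word, word_id = entry[0], entry[1]
    if word in wv:
        emb_matrix[word_id] = wv[word]
        found += 1

print("Covered %d / %d dictionary words" % (found, vocab_size))
np.save("Word2Vec_WordEmb.npy", emb_matrix)
```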