Daniel Lindsäth
Please put the error message in a comment instead; it's unreadable as a title.
@Sharathnasa, you need to run the text through the Stanford tokenizer Java program first in order to create a token list file to feed to the network. Basically, in Linux,...
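A sketch of that tokenization step, assuming the Stanford CoreNLP jar is on the classpath (the class name is the one the pointer-generator preprocessing scripts invoke; treat the paths and filenames as placeholders):

```shell
# Put the Stanford CoreNLP jar on the classpath (path is a placeholder)
export CLASSPATH=/path/to/stanford-corenlp.jar

# Tokenize the article; PTBTokenizer reads stdin when no file is given,
# and -preserveLines keeps the original line structure
cat article.txt | java edu.stanford.nlp.process.PTBTokenizer -preserveLines > article.tokenized.txt
```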
@Sharathnasa text as in the entire article without an abstract, yes. That will create a bin file with a single article in it. Use the vocab file you already have...
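For reference, the .bin files the model reads are a sequence of length-prefixed serialized tf.Example protos, one per article. A minimal sketch of just that framing, using a placeholder byte string in place of a real serialized proto (function names here are illustrative, not part of the repo):

```python
import struct

def write_record(f, serialized_example: bytes) -> None:
    # Each record is an 8-byte length header followed by the payload
    f.write(struct.pack('q', len(serialized_example)))
    f.write(serialized_example)

def read_records(f):
    # Yield payloads until the file is exhausted
    while True:
        header = f.read(8)
        if not header:
            break
        (length,) = struct.unpack('q', header)
        yield f.read(length)

if __name__ == '__main__':
    # Placeholder bytes; in practice this is a serialized tf.Example
    payload = b'tokenized article text here'
    with open('single_article.bin', 'wb') as f:
        write_record(f, payload)
    with open('single_article.bin', 'rb') as f:
        assert list(read_records(f)) == [payload]
```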
@Sharathnasa you can't pipe _vi_ into java; use _cat_ to pipe the contents of the text file into java
Nope, just use the old vocab file used for training, and the file created by tokenization as input to the model: `python pointer-generator/run_summarization.py --log_root= --exp_name= --vocab_path= --mode=decode --data_path=`
No. Is there anything in particular there you mean I should be aware of? I've never had the need to summarize multiple texts at once, so I haven't looked into that use...
@xiyan524 , it looks like you've copied part of the model and re-used it without changing its name.
Can you post the diff of your changes? It's hard to guess without knowing what you've done, especially since this isn't my project so I don't know it by heart...
`linear()` is used several times, but you'll notice that it's almost always inside a `with variable_scope.variable_scope()` block, which makes it unique. If you don't do something similar with your matrix, that...
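The idea behind `variable_scope` is that the same variable name can coexist under different scope prefixes, while a duplicate name within one scope is an error. A toy sketch of that mechanism in plain Python (these class and method names are hypothetical, not the TensorFlow API):

```python
class VariableStore:
    """Toy model of TF1-style scoped variable naming."""

    def __init__(self):
        self._vars = {}    # full scoped name -> value
        self._scope = []   # stack of active scope names

    def variable_scope(self, name):
        store = self

        class _Scope:
            def __enter__(self):
                store._scope.append(name)

            def __exit__(self, *exc):
                store._scope.pop()

        return _Scope()

    def get_variable(self, name, value):
        # The scope stack prefixes the name, making it unique per scope
        full_name = '/'.join(self._scope + [name])
        if full_name in self._vars:
            raise ValueError(f'Variable {full_name} already exists')
        self._vars[full_name] = value
        return full_name

store = VariableStore()
with store.variable_scope('attention'):
    store.get_variable('w', 0)   # registered as 'attention/w'
with store.variable_scope('my_copy'):
    store.get_variable('w', 0)   # fine: 'my_copy/w' is a distinct name
```

Re-using a copied piece of the model without opening a new scope is like calling `get_variable('w', ...)` twice inside `'attention'`: the second call collides.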
Glad to hear you got it working!