Jaffer Wilson

Try this: https://github.com/JafferWilson/Process-Data-of-CNN-DailyMail. I guess it will solve your tokenization and other issues.

@quanghuynguyen1902 I guess you have already opened a new issue: https://github.com/abisee/cnn-dailymail/issues/29. Let's continue there. Please, someone close this issue.

Please let me know whether you are using stanford-corenlp-full-2016-10-31/stanford-corenlp-3.7.0.jar or the 2017 version. This error mostly occurs when you are not using stanford-corenlp-full-2016-10-31/stanford-corenlp-3.7.0.jar. Please check.
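
For context, here is a rough sketch of how the tokenization step drives the CoreNLP tokenizer, loosely following what `make_datafiles.py` does; the jar path and mapping file name below are placeholders, and the jar must be on the CLASSPATH, which is why the exact 2016-10-31 / 3.7.0 release matters:

```python
import os
import subprocess

# Assumption: the 2016-10-31 CoreNLP release is unpacked next to this script.
# The tokenizer is found via the CLASSPATH; pointing it at a different jar
# (e.g. the 2017 release) is a common cause of the error discussed above.
os.environ["CLASSPATH"] = "stanford-corenlp-full-2016-10-31/stanford-corenlp-3.7.0.jar"

def tokenize(mapping_file):
    """Run Stanford PTBTokenizer over the files listed in mapping_file.

    mapping_file contains one "<input path> \t <output path>" pair per line,
    which is the list format the preprocessing script writes before tokenizing.
    """
    command = ["java", "edu.stanford.nlp.process.PTBTokenizer",
               "-ioFileList", "-preserveLines", mapping_file]
    subprocess.check_call(command)
```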

I have already created the processed files, so you can try those without any issue. Here is the link: https://github.com/JafferWilson/Process-Data-of-CNN-DailyMail. Use Python 2.7.

@97yogitha No, do not use the 2017 one; use the 2016 version, which is mentioned in the README file of the repository.

Please, someone close this issue.

You need to format your data according to the CNN or DM dataset, and it will work. Otherwise, modify the tokenizing file according to your data. That's the solution. It is...
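
As a rough illustration of the layout the preprocessing scripts expect, each CNN/DM `.story` file is plain text with the article body first and each summary sentence introduced by an `@highlight` line. A minimal sketch of writing your own data in that shape (the file name and sentences are just placeholders):

```python
def write_story(path, article_sentences, summary_sentences):
    """Write one example in the CNN/DailyMail .story layout:
    article paragraphs first, then each summary sentence after an @highlight tag."""
    with open(path, "w") as f:
        for sent in article_sentences:
            f.write(sent + "\n\n")
        for sent in summary_sentences:
            f.write("@highlight\n\n" + sent + "\n\n")

# Hypothetical example data, only to show the layout.
write_story("my_data/0001.story",
            ["First sentence of the article.", "Second sentence."],
            ["One-line summary of the article."])
```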

@WangLilian @abisee Wang, if you wish to see the `txt` version of the data, then you can convert it using the file [data_convert_example.py](https://github.com/tensorflow/models/blob/master/textsum/data_convert_example.py). This will help you convert the `bin...
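
If you cannot run that script, here is a minimal sketch of the same idea, assuming the `.bin` files follow the length-prefixed `tf.Example` layout that `make_datafiles.py` writes (an 8-byte length, then a serialized example with `article` and `abstract` features); the path below is a placeholder:

```python
import struct
from tensorflow.core.example import example_pb2

def bin_to_text(bin_path):
    """Yield (article, abstract) pairs from a length-prefixed tf.Example file."""
    with open(bin_path, "rb") as reader:
        while True:
            len_bytes = reader.read(8)
            if not len_bytes:
                break  # end of file
            str_len = struct.unpack("q", len_bytes)[0]
            example_str = struct.unpack("%ds" % str_len, reader.read(str_len))[0]
            example = example_pb2.Example.FromString(example_str)
            article = example.features.feature["article"].bytes_list.value[0]
            abstract = example.features.feature["abstract"].bytes_list.value[0]
            yield article, abstract

# Hypothetical path; point it at one of the generated .bin files.
for article, abstract in bin_to_text("finished_files/test.bin"):
    print(abstract)
```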

@qlwang25 Thank you for letting me know about the link. Here is the [data_convert_example.py](https://github.com/tensorflow/models/blob/master/research/textsum/data_convert_example.py). Hope this helps. @abisee Please close this issue. I guess this issue is resolved and had been...

Try my repository and get it running: https://github.com/abisee/cnn-dailymail#option-1-download-the-processed-data