
source truncation size in summarization task

Open XinyuHua opened this issue 5 years ago • 1 comments

Hi,

According to the README file, for summarization (cnndm) task the following truncation setup is recommended: -src_seq_length_trunc 400

However, in the training data the average/median source length is 925/841 tokens, and more than 90% of the examples are longer than 400 BPE tokens. Would it be problematic to throw away the rest of the text, or is this simply an efficiency consideration? Thanks!

XinyuHua avatar Sep 16 '19 22:09 XinyuHua

Hi,

This is a preprocessing choice we inherited from previous summarization work with OpenNMT, which found that the first 400 tokens are often enough to compose a good summary. That work was largely conducted with LSTMs, though, so performance might improve measurably by increasing this truncation length.
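For anyone unsure what `-src_seq_length_trunc` does at preprocessing time, here is a minimal sketch (not the actual OpenNMT implementation, just the effect it has on a tokenized source sequence):

```python
def truncate_source(tokens, max_len=400):
    """Keep only the first max_len tokens of a source sequence.

    Sketch of the effect of -src_seq_length_trunc 400 during
    preprocessing; the real OpenNMT internals may differ.
    """
    return tokens[:max_len]

# An article of roughly the reported average length (925 tokens)
# is cut down to the first 400 tokens.
article = ["tok%d" % i for i in range(925)]
truncated = truncate_source(article)
print(len(truncated))  # 400
```

Note this is truncation, not filtering: long examples are kept but shortened, which is why the head-of-article bias matters for CNN/DM, where lead sentences carry most of the summary-relevant content.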

zackziegler95 avatar Sep 23 '19 15:09 zackziegler95