
Suggestion for doing coref for longer sequences?

Open ehsan-soe opened this issue 4 years ago • 1 comment

Hi,

First, thank you for providing this valuable resource. According to Table 4 of the BERT paper, performance declines for long sequences of length 1152+. I want to do coref on my dataset, in which the average sequence length is 1500+. Do you suggest using 'spanbert' on my data as it is, or is it better to segment the data into pieces of length 512? Of course, both have their drawbacks in negatively affecting the performance of the pretrained model, but which approach do you suggest?

ehsan-soe · Jan 09 '20

Thanks for your interest, Ehsan. I'm not sure I understand the choices you're thinking about. The pre-trained SpanBERT model can only encode documents up to 512 tokens in a single instance. We handle longer documents by splitting them into non-overlapping chunks (we could not get overlapping chunks to work better), and encoding each independently using BERT. So span pairs in different chunks are only connected via the MLPs. I'm not sure what alternative you're referring to.
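To make the chunking concrete, here is a minimal sketch of splitting a long tokenized document into non-overlapping 512-token segments. This is a hypothetical illustration, not the repo's actual segmentation code (which may additionally respect sentence boundaries); the function name and `max_len` default are assumptions.

```python
# Hypothetical illustration of non-overlapping chunking for long documents.
# `tokens` is a tokenized document; `max_len` matches BERT's 512-token limit.
from typing import List

def split_into_segments(tokens: List[str], max_len: int = 512) -> List[List[str]]:
    """Split a token sequence into consecutive, non-overlapping chunks."""
    return [tokens[i:i + max_len] for i in range(0, len(tokens), max_len)]

# Each segment would then be encoded independently by BERT/SpanBERT;
# span pairs in different segments interact only through the downstream MLPs.
segments = split_into_segments(["tok"] * 1500)
print([len(s) for s in segments])  # -> [512, 512, 476]
```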

mandarjoshi90 · Jan 10 '20