
Using alternative sequence lengths for SQuAD-based models in the Open division

Open psyhtest opened this issue 3 years ago • 4 comments

For BERT Large, we tokenize the SQuAD v1.1 dataset into sequences of up to 384 tokens. For smaller models such as BERT Base, a sequence length of 128 is often used. Would it be allowed to tokenize the dataset into shorter sequences for submissions to the Open division?
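
For concreteness, a minimal sketch of the two tokenization settings using the Hugging Face tokenizer (the checkpoint name and API here are illustrative assumptions, not the MLPerf reference preprocessing):

```python
# Illustrative only: not the MLCommons reference preprocessing script.
from transformers import BertTokenizerFast

tokenizer = BertTokenizerFast.from_pretrained("bert-large-uncased")  # hypothetical checkpoint choice
question = "What is the capital of France?"
context = "Paris is the capital and most populous city of France."

for max_len in (384, 128):  # 384 = closed-division setting; 128 = proposed shorter length
    enc = tokenizer(
        question,
        context,
        max_length=max_len,
        truncation="only_second",  # truncate the context, never the question
        padding="max_length",
    )
    print(max_len, len(enc["input_ids"]))  # each sequence is padded/truncated to max_len
```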

psyhtest avatar Dec 20 '22 15:12 psyhtest

@rnaidu02 Let us discuss this in the first WGM in 2023.

nv-ananjappa avatar Dec 20 '22 17:12 nv-ananjappa

There was no disagreement with the proposal at the 1/3/2023 IWG meeting.

rnaidu02 avatar Jan 03 '23 17:01 rnaidu02

I think this idea breaks uniformity and should not be allowed in the Open division.

By reducing the sequence length, you get a "free" speedup and sidestep one of the main costs of transformers, the attention mechanism, whose time complexity is quadratic in sequence length (384^2 >> 128^2, a factor of 9).

In addition, this makes "advancements" and comparisons with previous MLPerf submissions meaningless: comparing the performance of seq_len 384 against seq_len 128 is not an apples-to-apples comparison.
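
A back-of-the-envelope check of that quadratic gap (illustrative only; it counts attention-score pairs and ignores the linear-in-length FFN and projection costs):

```python
# Self-attention computes a score for every pair of token positions,
# so its cost grows with seq_len ** 2.
for seq_len in (128, 384):
    print(seq_len, seq_len ** 2)

# 384 ** 2 / 128 ** 2 == 9, so the attention-score computation alone is
# roughly 9x cheaper at seq_len 128 than at the benchmark-defined 384.
print(384 ** 2 // 128 ** 2)  # -> 9
```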

najeeb5 avatar Jan 10 '23 14:01 najeeb5

From the 1/10/2023 IWG meeting, it was determined that the sequence length is part of the benchmark definition (sequence length of 384).

rnaidu02 avatar Jan 10 '23 17:01 rnaidu02