
Fix evaluation code to improve performance

Open · vigneshwaran opened this issue on Aug 10, 2023 · 4 comments

I have noticed that llm-foundry/eval takes much longer than lm-evaluation-harness. After digging into the code, I found that padding tokens are appended to every input up to the tokenizer's maximum sequence length:

    inp, continuation_span = _make_padded_input(context_enc, continuation_enc, self.max_seq_len,
                                                self.pad_tok_id)

https://github.com/bmosaicml/composer/blob/1011f90f2653dae103c3837c968071e399b1decc/composer/datasets/in_context_learning_evaluation.py#L418C1-L428C59
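To make the cost concrete (with hypothetical numbers, not measurements from this repo): if the longest example in a batch is around 300 tokens but every row is padded to max_seq_len = 2048, the model runs its forward pass over roughly 2048 / 300 ≈ 6.8x more tokens per row than necessary, and since self-attention cost grows superlinearly with the padded length, almost all of that extra work is spent on pad tokens.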

My proposal:

Instead of padding to max_seq_len, pad to the maximum sequence length present in the batch:

    inp, continuation_span = _make_padded_input(context_enc, continuation_enc, max_len_of_data,
                                                self.pad_tok_id)
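A minimal sketch of how the collate step could compute that batch-wise maximum before padding. This assumes the collator sees (context_enc, continuation_enc) pairs of token-id lists and that _make_padded_input returns a fixed-length tensor, as in the linked file; every other name here is illustrative:

    import torch

    def _collate_batch(self, examples):
        # examples: list of (context_enc, continuation_enc) token-id lists (assumed shape).
        # Pad only to the longest context + continuation in this batch,
        # capped at the model's max_seq_len, rather than always to max_seq_len.
        max_len_of_data = min(
            self.max_seq_len,
            max(len(ctx) + len(cont) for ctx, cont in examples),
        )
        inputs, spans = [], []
        for context_enc, continuation_enc in examples:
            inp, continuation_span = _make_padded_input(
                context_enc, continuation_enc, max_len_of_data, self.pad_tok_id)
            inputs.append(inp)
            spans.append(continuation_span)
        return torch.stack(inputs), spans

Since every row is padded to the same max_len_of_data, the tensors still stack into a rectangular batch; the only behavioral difference is that batches of short examples no longer carry thousands of pad tokens.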

This improved latency by 400% when I used a sequence length of 2048. The gain would be even larger for models trained with longer sequence lengths.

vigneshwaran · Aug 10 '23 18:08