
Pile tasks on big-refactor use dataset_names from old dataset loader that don't exist on HF

[Open] yeoedward opened this issue 1 year ago · 2 comments

Task example: https://github.com/EleutherAI/lm-evaluation-harness/blob/big-refactor/lm_eval/tasks/pile/pile_arxiv.yaml#L7

HF dataset: https://huggingface.co/datasets/EleutherAI/pile

Original dataset loader prior to big-refactor: https://github.com/EleutherAI/lm-evaluation-harness/blob/master/lm_eval/datasets/pile/pile.py

@haileyschoelkopf mentioned that using this loading script should work if we upload it to HF and point the Pile tasks to that new dataset.
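If the old loading script were re-uploaded to the Hub, the task YAML would mainly need its dataset reference repointed. A hedged sketch of what that could look like (the `your-org/pile` repository name is a placeholder, and the field names assume the big-refactor task YAML schema):

```yaml
# Hypothetical pile_arxiv.yaml fragment: point the task at a re-uploaded
# dataset repo that bundles the original loading script.
task: pile_arxiv
dataset_path: your-org/pile   # placeholder Hub repo hosting the loading script
dataset_name: pile_arxiv
test_split: test
```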

yeoedward · Aug 03 '23 15:08

Adding the file "pile.py" at "lm-evaluation-harness/EleutherAI/the_pile/the_pile.py" does indeed fix the issue. Additionally, the test split in pile_arxiv.yaml (line 9) needs to be changed to "test".

This recipe runs fairly fast, but I observe a strange trend: the first few samples are processed slowly (which is understandable), the middle samples are processed extremely fast, and the last few samples again take a long time. With "accelerate launch" the run nearly halts forever (I eventually killed the process after waiting several minutes), whereas a single GPU does produce the final output.

pratyushmaini · Aug 11 '23 18:08

An update to the above: since the Pile is no longer publicly available, you may want to modify _URLS to point at your local copy of the Pile. This is line 44 of the current pile.py:

```python
_URLS = {
    "validation": "/data/the_pile/val.jsonl.zst",
    "test": "/data/the_pile/test.jsonl.zst",
}
```
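Since these are now local filesystem paths rather than URLs, it can save a failed run to verify they exist before launching the harness. A minimal sketch (the `/data/the_pile/...` paths and the `check_local_splits` helper are illustrative, not part of the harness):

```python
import os

# Hypothetical local mirror of the Pile splits; adjust to wherever your
# copies of the compressed shards actually live.
_URLS = {
    "validation": "/data/the_pile/val.jsonl.zst",
    "test": "/data/the_pile/test.jsonl.zst",
}

def check_local_splits(urls):
    """Return the list of splits whose local files are missing."""
    return [split for split, path in urls.items() if not os.path.exists(path)]

missing = check_local_splits(_URLS)
if missing:
    print(f"Missing local Pile files for splits: {missing}")
```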

Also, there have been some changes to the repo since the last comment. The file should be placed at "lm-evaluation-harness/EleutherAI/pile/pile.py"
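The placement described above can be sketched as follows, assuming the current directory is the repository root and that `pile.py` is the loading script taken from the master branch (the `touch` line is only a stand-in so the sketch is self-contained):

```shell
# Place the old loading script where the big-refactor tasks expect it.
mkdir -p EleutherAI/pile
touch pile.py                      # stand-in; use the real pile.py from master
cp pile.py EleutherAI/pile/pile.py
```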

pratyushmaini · Oct 26 '23 17:10