
No EOD Tokens in EleutherAI/pile-deduped-pythia-preshuffled

Open markschoene opened this issue 1 year ago • 2 comments

According to 20B_tokenizer.json, the end-of-document (EOD) token has id 0 and is denoted <|endoftext|>. Earlier issues have reported that there are no EOD tokens in EleutherAI/pile-deduped-pythia-preshuffled, and this appears to have been outstanding since January 2024. I processed the Pile as instructed in the README.md, based on EleutherAI/pile-deduped-pythia-preshuffled, and I can confirm that there appear to be no EOD tokens in the dataset. Beyond using batch_viewer.py, I also started a training loop and recorded x.min() at the beginning of my forward(x) function. Both methods show that the smallest token id is 2, from which I conclude that this version of the dataset contains no EOD tokens.

This causes serious issues both for training and for evaluating on other datasets that use the EOD token, since a model trained on EleutherAI/pile-deduped-pythia-preshuffled never receives gradient updates for that token's embedding. Would it be possible to provide a tokenized Pile with 2049 tokens per sequence that does separate documents with EOD tokens?
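For reference, the two checks described above (scanning for token id 0, and the consequence for the EOD embedding row) can be sketched roughly like this. The shard path is hypothetical, the batch is synthetic, and the backward pass is simulated with plain numpy rather than an actual model:

```python
import numpy as np

EOD_ID = 0  # <|endoftext|> in 20B_tokenizer.json

# A real check would memory-map a preshuffled shard, e.g. (path hypothetical):
#   tokens = np.memmap("document-00000-of-00020.bin", dtype=np.uint16, mode="r")
# Here, a synthetic batch whose smallest id is 2, matching what I observed:
batch = np.array([[15, 2, 7, 99],
                  [3, 4, 5, 6]], dtype=np.uint16)

print(batch.min())              # smallest observed id; 0 would mean EOD is present
print((batch == EOD_ID).any())  # is the EOD token anywhere in the batch?

# Why this matters for training: an embedding lookup only routes gradients to
# the rows that were actually looked up. Simulating the embedding backward pass:
vocab, dim = 128, 4
grad = np.zeros((vocab, dim))
upstream = np.ones((batch.size, dim))     # stand-in gradient from the loss
np.add.at(grad, batch.ravel(), upstream)  # scatter-add into looked-up rows

# Row 0 (EOD) never appears in the batch, so its embedding row gets zero gradient
print(grad[EOD_ID].sum())
```

If every batch drawn from the dataset behaves like this one, the EOD embedding stays at its random initialization for the entire training run.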

markschoene avatar Oct 04 '24 13:10 markschoene

This issue was previously mentioned, without a reply from the team, here: https://github.com/EleutherAI/pythia/issues/123#issuecomment-1882232326

markschoene avatar Oct 04 '24 15:10 markschoene

Sorry for the late reply!

This causes serious issues both for training and for evaluating on other datasets that use the EOD token, since a model trained on EleutherAI/pile-deduped-pythia-preshuffled never receives gradient updates for that token's embedding. Would it be possible to provide a tokenized Pile with 2049 tokens per sequence that does separate documents with EOD tokens?

Can you explain what the value of doing this would be?

StellaAthena avatar May 23 '25 11:05 StellaAthena