llm-foundry
Set pad_token_id to tokenizer.pad_token_id if not set on command line
The hf_chat.py program emits this warning before each chat response:
The attention mask and the pad token id were not set. As a consequence, you may observe unexpected behavior. Please pass your input's attention_mask to obtain reliable results.
Setting pad_token_id to eos_token_id:0 for open-end generation.
Fixed by setting pad_token_id to tokenizer.eos_token_id when it is not given on the command line.
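A minimal sketch of the fallback logic described above. The helper name, the stand-in tokenizer class, and the generate_kwargs dict are illustrative assumptions, not the actual hf_chat.py code:

```python
def resolve_pad_token_id(cli_pad_token_id, tokenizer):
    """Fall back to the tokenizer's eos_token_id when no pad_token_id
    was supplied on the command line (mirrors the fix described above)."""
    if cli_pad_token_id is not None:
        return cli_pad_token_id
    return tokenizer.eos_token_id


class FakeTokenizer:
    # Stand-in for a Hugging Face tokenizer; the EOS id of 0 matches the
    # "eos_token_id:0" shown in the warning message above.
    eos_token_id = 0
    pad_token_id = None  # unset, which is what triggers the warning


tokenizer = FakeTokenizer()

# No --pad_token_id on the command line: default to eos_token_id.
generate_kwargs = {"pad_token_id": resolve_pad_token_id(None, tokenizer)}
print(generate_kwargs["pad_token_id"])  # → 0

# Explicit command-line value wins over the tokenizer default.
print(resolve_pad_token_id(5, tokenizer))  # → 5
```

Passing an explicit pad_token_id in the kwargs handed to model.generate() suppresses the "Setting pad_token_id to eos_token_id" message, since the model no longer has to pick a default itself.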