2 comments of Roman Ishchenko
TL;DR: add `tokenizer.pad_token_id = 0` to your code.

The main problem is that the code from ChatGPT relies on `pad_token_id`, which the meta-LLAMA model doesn't define. But if you look a little closer into `special_tokens_map.json`...
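A minimal sketch of the fix, assuming a Hugging Face `transformers` setup; the model id below is a placeholder, since the original thread doesn't name one:

```python
from transformers import AutoTokenizer

# Placeholder model id for illustration only.
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")

# LLaMA tokenizers ship without a pad token, so any call that pads a
# batch fails unless one is assigned explicitly.
tokenizer.pad_token_id = 0  # token 0 is <unk> in the LLaMA vocabulary

batch = tokenizer(
    ["short", "a somewhat longer sentence"],
    padding=True,
    return_tensors="pt",
)
print(batch["input_ids"])
```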
The problem is in the pydantic library; try changing the version (for example, `pip install --upgrade pydantic==1.10.12`).
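After pinning, a quick sanity check that the downgrade actually took effect (the original error isn't shown in the thread, so this only verifies the installed version):

```python
import pydantic

# Expect "1.10.12" after running the pip command above.
print(pydantic.VERSION)
```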