FastChat
Fix for `ValueError: Tokenizer class GPTNeoXTokenizer does not exist or is not currently imported.`
According to the discussion on the transformers issue tracker, there's a fix that also works for FastChat:
https://github.com/huggingface/transformers/issues/17756#issuecomment-1573319214
```diff
diff --git a/fastchat/model/model_adapter.py b/fastchat/model/model_adapter.py
index facfbee..c1b6d35 100644
--- a/fastchat/model/model_adapter.py
+++ b/fastchat/model/model_adapter.py
@@ -43,7 +43,7 @@ class BaseAdapter:
     def load_model(self, model_path: str, from_pretrained_kwargs: dict):
         tokenizer = AutoTokenizer.from_pretrained(
-            model_path, use_fast=self.use_fast_tokenizer
+            model_path, use_fast=True
         )
         model = AutoModelForCausalLM.from_pretrained(
             model_path, low_cpu_mem_usage=True, **from_pretrained_kwargs
```
The fix works for me on EleutherAI_pythia-1.4b-deduped.
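For context, the `ValueError` comes from `AutoTokenizer` trying to resolve a slow `GPTNeoXTokenizer` class, which does not exist; GPT-NeoX models (including the Pythia family) only ship the fast, Rust-backed `GPTNeoXTokenizerFast`. A minimal standalone sketch of the workaround, independent of FastChat (assuming `transformers` is installed and the Hugging Face model `EleutherAI/pythia-1.4b-deduped` is reachable):

```python
from transformers import AutoTokenizer

# use_fast=True makes AutoTokenizer load GPTNeoXTokenizerFast instead of
# attempting to import the non-existent slow GPTNeoXTokenizer class.
tokenizer = AutoTokenizer.from_pretrained(
    "EleutherAI/pythia-1.4b-deduped", use_fast=True
)
ids = tokenizer("Hello world").input_ids
print(type(tokenizer).__name__, ids)
```

The same `use_fast=True` argument is what the diff above forces inside FastChat's `BaseAdapter.load_model`.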
Could you contribute a pull request?
It seems this got fixed elsewhere: I am running with the unmodified file and can no longer reproduce the error. Will close.