
Not able to load Fastchat 3b model

Open MbBrainz opened this issue 1 year ago • 1 comments

Describe the bug

When starting the server and selecting lmsys/fastchat-t5-3b-v1.0 (downloaded using download-model.py), loading fails with the error shown in the logs below:

Is there an existing issue for this?

  • [X] I have searched the existing issues

Reproduction

download the fastchat model from Hugging Face using download-model.py

run ./start_linux.sh

select fastchat model

--> Error

Screenshot

No response

Logs

INFO:Loading lmsys_fastchat-t5-3b-v1.0...
Traceback (most recent call last):
  File "/home/user1/code/text-gn-webui/oobabooga_linux/text-generation-webui/server.py", line 948, in <module>
    shared.model, shared.tokenizer = load_model(shared.model_name)
  File "/home/user1/code/text-gn-webui/oobabooga_linux/text-generation-webui/modules/models.py", line 253, in load_model
    tokenizer = AutoTokenizer.from_pretrained(Path(f"{shared.args.model_dir}/{model_name}/"), trust_remote_code=trust_remote_code)
  File "/home/user1/code/text-gn-webui/oobabooga_linux/installer_files/env/lib/python3.10/site-packages/transformers/models/auto/tokenization_auto.py", line 702, in from_pretrained
    return tokenizer_class.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs)
  File "/home/user1/code/text-gn-webui/oobabooga_linux/installer_files/env/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 1811, in from_pretrained
    return cls._from_pretrained(
  File "/home/user1/code/text-gn-webui/oobabooga_linux/installer_files/env/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 1841, in _from_pretrained
    slow_tokenizer = (cls.slow_tokenizer_class)._from_pretrained(
  File "/home/user1/code/text-gn-webui/oobabooga_linux/installer_files/env/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 1965, in _from_pretrained
    tokenizer = cls(*init_inputs, **init_kwargs)
  File "/home/user1/code/text-gn-webui/oobabooga_linux/installer_files/env/lib/python3.10/site-packages/transformers/models/t5/tokenization_t5.py", line 154, in __init__
    self.sp_model.Load(vocab_file)
  File "/home/user1/code/text-gn-webui/oobabooga_linux/installer_files/env/lib/python3.10/site-packages/sentencepiece/__init__.py", line 905, in Load
    return self.LoadFromFile(model_file)
  File "/home/user1/code/text-gn-webui/oobabooga_linux/installer_files/env/lib/python3.10/site-packages/sentencepiece/__init__.py", line 310, in LoadFromFile
    return _sentencepiece.SentencePieceProcessor_LoadFromFile(self, arg)
TypeError: not a string

Done!
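The `TypeError: not a string` at the bottom of the traceback comes from sentencepiece being handed `vocab_file=None`, which is what transformers passes down when `spiece.model` is absent from the model directory. A minimal diagnostic sketch (the directory name is the one from this report, and the helper is hypothetical; adjust to your setup):

```python
from pathlib import Path

# The slow T5 tokenizer needs the SentencePiece vocab file. If it is
# missing, transformers passes vocab_file=None to sentencepiece, which
# raises "TypeError: not a string" instead of a clearer error.
REQUIRED = ["spiece.model"]

def check_tokenizer_files(model_dir: str) -> list[str]:
    """Return the list of required tokenizer files missing from model_dir."""
    d = Path(model_dir)
    return [f for f in REQUIRED if not (d / f).is_file()]

missing = check_tokenizer_files("models/lmsys_fastchat-t5-3b-v1.0")
if missing:
    print("Missing tokenizer files:", ", ".join(missing))
```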

System Info

## system specs: 

### OS 
Operating System: Ubuntu 22.04.2 LTS
          Kernel: Linux 5.19.0-41-generic
    Architecture: x86-64

### CPU
CPU op-mode(s):        32-bit, 64-bit
  Address sizes:         39 bits physical, 48 bits virtual
  Byte Order:            Little Endian
CPU(s):                  8
  On-line CPU(s) list:   0-7
Vendor ID:               GenuineIntel
  Model name:            Intel(R) Core(TM) i7-6700HQ CPU @ 2.60GHz

### GPU
nvidia quadro M1000M
NVIDIA-SMI 530.41.03 
Driver Version: 530.41.03    
CUDA Version: 12.1

MbBrainz avatar May 13 '23 16:05 MbBrainz

Will it load if you edit the model config and change "is_encoder_decoder": true, to false?

Ph0rk0z avatar May 13 '23 17:05 Ph0rk0z

When I do that, I get the following error:

ValueError: Unrecognized configuration class <class 'transformers.models.t5.configuration_t5.T5Config'> for this kind of AutoModel: AutoModelForCausalLM.
Model type should be one of BartConfig, BertConfig, BertGenerationConfig, BigBirdConfig, BigBirdPegasusConfig, BioGptConfig, BlenderbotConfig, BlenderbotSmallConfig, BloomConfig, CamembertConfig, CodeGenConfig, CpmAntConfig, CTRLConfig,
Data2VecTextConfig, ElectraConfig, ErnieConfig, GitConfig, GPT2Config, GPT2Config, GPTBigCodeConfig, GPTNeoConfig, GPTNeoXConfig, GPTNeoXJapaneseConfig, GPTJConfig, LlamaConfig, MarianConfig, MBartConfig, MegaConfig, MegatronBertConfig,
MvpConfig, OpenAIGPTConfig, OPTConfig, PegasusConfig, PLBartConfig, ProphetNetConfig, QDQBertConfig, ReformerConfig, RemBertConfig, RobertaConfig, RobertaPreLayerNormConfig, RoCBertConfig, RoFormerConfig, Speech2Text2Config,
TransfoXLConfig, TrOCRConfig, XGLMConfig, XLMConfig, XLMProphetNetConfig, XLMRobertaConfig, XLMRobertaXLConfig, XLNetConfig, XmodConfig.

MbBrainz avatar May 18 '23 20:05 MbBrainz

It's loading as the wrong type. This is a seq2seq.

Ph0rk0z avatar May 19 '23 11:05 Ph0rk0z
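To sketch what "wrong type" means here: fastchat-t5 is a T5 encoder-decoder model, so it must go through `AutoModelForSeq2SeqLM` rather than `AutoModelForCausalLM` — flipping `is_encoder_decoder` in the config just makes the wrong loader choke on a T5 config. A hedged helper showing the dispatch (the `is_encoder_decoder` flag is the real config key; the helper itself is made up for illustration):

```python
import json
from pathlib import Path

def auto_class_for(model_dir: str) -> str:
    """Pick the transformers Auto class name based on config.json."""
    config = json.loads((Path(model_dir) / "config.json").read_text())
    if config.get("is_encoder_decoder", False):
        return "AutoModelForSeq2SeqLM"   # T5, BART, ... (seq2seq models)
    return "AutoModelForCausalLM"        # GPT-style decoder-only models
```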

@MbBrainz It appears that download-model.py missed a file for fastchat-t5. It also seems like requirements.txt is missing a module. (Excuse me, as I am really new to all of this text generation, so I can't make a pull request.)

Here are the steps to fix it manually:

  • go to the Hugging Face repo for fastchat-t5
  • download the spiece.model file manually
  • put the file into text-generation-webui/models/lmsys_fastchat-t5-3b-v1.0/ (assuming you have already downloaded the model)
  • install protobuf with pip install protobuf==3.20.3
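For reference, a single Hub file can be fetched directly via the `resolve/main` URL pattern, and text-generation-webui flattens `owner/name` into an `owner_name` directory under `models/`. A small sketch of where the missing file comes from and goes to (the helper names are made up; the URL pattern and directory layout are the real ones):

```python
from pathlib import Path

def hub_file_url(repo_id: str, filename: str) -> str:
    # Direct-download URL pattern for a single file on the Hugging Face Hub
    return f"https://huggingface.co/{repo_id}/resolve/main/{filename}"

def webui_target(repo_id: str, filename: str) -> Path:
    # text-generation-webui stores models under models/<owner>_<name>/
    return Path("models") / repo_id.replace("/", "_") / filename

print(hub_file_url("lmsys/fastchat-t5-3b-v1.0", "spiece.model"))
print(webui_target("lmsys/fastchat-t5-3b-v1.0", "spiece.model"))
```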


the-unsoul avatar May 20 '23 19:05 the-unsoul

@the-unsoul I put the spiece.model file in place manually and installed protobuf==3.19.6 for dependency compatibility. Finally I can load a model that has no tokenizer.json but does have spiece.model. Thank you.

(For people coming here from the future) conda env list shows:

base                     C:\oobabooga_windows\installer_files\conda
                         C:\oobabooga_windows\installer_files\env

and you need to conda activate C:\oobabooga_windows\installer_files\env and then pip install protobuf==3.19.6.

My PATH contains C:\oobabooga_windows\installer_files\conda\condabin, C:\oobabooga_windows\installer_files\conda, and C:\oobabooga_windows\installer_files\conda\Scripts; I haven't installed any other Python.

szriru avatar May 29 '23 21:05 szriru

I had the same problem with the following model: https://huggingface.co/rinna/japanese-gpt-neox-3.6b-instruction-ppo

The custom model download function does not seem to download the required files at this time.

Solution:

  1. python download-model.py rinna/japanese-gpt-neox-3.6b-instruction-ppo
  2. Download spiece.model and move to text-generation-webui/models/rinna_japanese-gpt-neox-3.6b-instruction-ppo/
  3. Download spiece.vocab and move to text-generation-webui/models/rinna_japanese-gpt-neox-3.6b-instruction-ppo/
  4. pip install protobuf==3.20.3

nitky avatar Jun 04 '23 04:06 nitky

This issue has been closed due to inactivity for 30 days. If you believe it is still relevant, please leave a comment below.

github-actions[bot] avatar Aug 13 '23 23:08 github-actions[bot]