text-generation-webui
TypeError: not a string
Hi, I have this issue when I load llama-7B on my RTX 2080 Ti:
```
(textgen) X:\LLama chat\text-generation-webui>python server.py --model llama-7b --load-in-8bit
Loading llama-7b...

===================================BUG REPORT===================================
Welcome to bitsandbytes. For bug reports, please submit your error trace to:
https://github.com/TimDettmers/bitsandbytes/issues

Loading checkpoint shards: 100%|███████████████████████████████████████████████████████| 33/33 [00:15<00:00, 2.13it/s]
Traceback (most recent call last):
  File "X:\LLama chat\text-generation-webui\server.py", line 194, in

(textgen) X:\LLama chat\text-generation-webui>
```
Can you try reconverting the model following the instructions here?
https://github.com/oobabooga/text-generation-webui/wiki/LLaMA-model
I re-ran the model conversion with the latest version of the script, and I uninstalled and reinstalled "transformers" (the custom llama version), but I still get the same error message.
Did you move the tokenizer files into the model folder after converting? I made that mistake originally and had the same error.
Yes, I have these files in the "text-generation-webui\models\llama-7b" folder:
- config.json
- generation_config.json
- pytorch_model.bin.index.json
- special_tokens_map.json
- tokenizer_config.json
- pytorch_model-00001-of-00033.bin through pytorch_model-00033-of-00033.bin
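For anyone debugging the same problem, a quick way to spot a missing file is to compare the model folder against the files listed above. This is a minimal sketch; the expected-file list is assembled from this thread, not from any official loader specification:

```python
import os

# Files this thread suggests a converted LLaMA folder should contain.
# Assumption: this list is based on the discussion above, not an official spec.
EXPECTED = [
    "config.json",
    "generation_config.json",
    "pytorch_model.bin.index.json",
    "special_tokens_map.json",
    "tokenizer_config.json",
    "tokenizer.model",  # the file missing in the report above
]

def missing_files(model_dir):
    """Return the expected files that are absent from model_dir."""
    present = set(os.listdir(model_dir))
    return [name for name in EXPECTED if name not in present]
```

Running `missing_files(r"text-generation-webui\models\llama-7b")` on the folder described above would report `tokenizer.model` as absent.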
I believe you also need tokenizer.model. It might be in the folder of the original unconverted model rather than the one created by the conversion script.
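The fix can be sketched as copying tokenizer.model from the original (unconverted) weights folder into the converted model folder. The helper name and the example paths below are hypothetical; adjust them to your own layout:

```python
import shutil
from pathlib import Path

def ensure_tokenizer(original_dir, converted_dir):
    """Copy tokenizer.model from the unconverted weights folder into the
    converted model folder if it is not already there (hypothetical helper)."""
    src = Path(original_dir) / "tokenizer.model"
    dst = Path(converted_dir) / "tokenizer.model"
    if not dst.exists():
        shutil.copy2(src, dst)  # the tokenizer expects this file next to the weights
    return dst

# Hypothetical paths -- adjust to your own layout:
# ensure_tokenizer("LLaMA-original", r"text-generation-webui\models\llama-7b")
```

Calling it a second time is a no-op, since the copy only happens when the destination file is missing.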
Hi, thanks, that was it: I was missing that file (tokenizer.model), and now I can launch the GUI. I have a new error (RuntimeError: CUDA error: an illegal memory access was encountered), but my original issue is solved. Thanks for the answer.