FastChat
AttributeError: 'Encoding' object has no attribute 'num_tokens'
fschat==0.2.36
WARNING 02-29 16:06:12 config.py:140] gptq quantization is not fully optimized yet. The speed can be slower than non-quantized models.
INFO 02-29 16:06:12 llm_engine.py:72] Initializing an LLM engine with config: model='/mnt/Qwen-1_8B-Chat-Int4', tokenizer='/mnt/Qwen-1_8B-Chat-Int4', tokenizer_mode=auto, revision=None, tokenizer_revision=None, trust_remote_code=True, dtype=torch.float16, max_seq_len=8192, download_dir=None, load_format=auto, tensor_parallel_size=1, quantization=gptq, seed=0)
WARNING 02-29 16:06:13 tokenizer.py:66] Using a slow tokenizer. This might cause a significant slowdown. Consider using a fast tokenizer instead.
2024-02-29 16:06:15 | INFO | datasets | PyTorch version 2.2.0 available.
INFO 02-29 16:06:25 llm_engine.py:219] # GPU blocks: 4060, # CPU blocks: 1365
2024-02-29 16:06:32 | INFO | model_worker | Loading the model ['Qwen-1_8B-Chat-Int4'] on worker 92230372, worker type: vLLM worker...
2024-02-29 16:06:32 | INFO | model_worker | Register to controller
2024-02-29 16:06:32 | ERROR | stderr | INFO: Started server process [65]
2024-02-29 16:06:32 | ERROR | stderr | INFO: Waiting for application startup.
2024-02-29 16:06:32 | ERROR | stderr | INFO: Application startup complete.
2024-02-29 16:06:32 | ERROR | stderr | INFO: Uvicorn running on http://0.0.0.0:21002 (Press CTRL+C to quit)
2024-02-29 16:07:17 | INFO | model_worker | Send heart beat. Models: ['Qwen-1_8B-Chat-Int4']. Semaphore: None. call_ct: 0. worker_id: 92230372.
2024-02-29 16:08:02 | INFO | model_worker | Send heart beat. Models: ['Qwen-1_8B-Chat-Int4']. Semaphore: None. call_ct: 0. worker_id: 92230372.
2024-02-29 16:08:08 | INFO | stdout | INFO: 127.0.0.1:60300 - "POST /worker_get_conv_template HTTP/1.1" 200 OK
2024-02-29 16:08:08 | INFO | stdout | INFO: 127.0.0.1:60308 - "POST /model_details HTTP/1.1" 200 OK
2024-02-29 16:08:08 | INFO | stdout | INFO: 127.0.0.1:60320 - "POST /count_token HTTP/1.1" 500 Internal Server Error
2024-02-29 16:08:08 | ERROR | stderr | ERROR: Exception in ASGI application
2024-02-29 16:08:08 | ERROR | stderr | Traceback (most recent call last):
2024-02-29 16:08:08 | ERROR | stderr | File "/root/miniconda3/lib/python3.9/site-packages/fastchat/serve/base_model_worker.py", line 156, in count_token
2024-02-29 16:08:08 | ERROR | stderr | input_ids = self.tokenizer(prompt).input_ids
2024-02-29 16:08:08 | ERROR | stderr | TypeError: 'Encoding' object is not callable
2024-02-29 16:08:08 | ERROR | stderr |
2024-02-29 16:08:08 | ERROR | stderr | During handling of the above exception, another exception occurred:
2024-02-29 16:08:08 | ERROR | stderr |
2024-02-29 16:08:08 | ERROR | stderr | Traceback (most recent call last):
2024-02-29 16:08:08 | ERROR | stderr | File "/root/miniconda3/lib/python3.9/site-packages/uvicorn/protocols/http/httptools_impl.py", line 419, in run_asgi
2024-02-29 16:08:08 | ERROR | stderr | result = await app( # type: ignore[func-returns-value]
2024-02-29 16:08:08 | ERROR | stderr | File "/root/miniconda3/lib/python3.9/site-packages/uvicorn/middleware/proxy_headers.py", line 84, in __call__
2024-02-29 16:08:08 | ERROR | stderr | return await self.app(scope, receive, send)
2024-02-29 16:08:08 | ERROR | stderr | File "/root/miniconda3/lib/python3.9/site-packages/fastapi/applications.py", line 1054, in __call__
2024-02-29 16:08:08 | ERROR | stderr | await super().__call__(scope, receive, send)
2024-02-29 16:08:08 | ERROR | stderr | File "/root/miniconda3/lib/python3.9/site-packages/starlette/applications.py", line 123, in __call__
2024-02-29 16:08:08 | ERROR | stderr | await self.middleware_stack(scope, receive, send)
2024-02-29 16:08:08 | ERROR | stderr | File "/root/miniconda3/lib/python3.9/site-packages/starlette/middleware/errors.py", line 186, in __call__
2024-02-29 16:08:08 | ERROR | stderr | raise exc
2024-02-29 16:08:08 | ERROR | stderr | File "/root/miniconda3/lib/python3.9/site-packages/starlette/middleware/errors.py", line 164, in __call__
2024-02-29 16:08:08 | ERROR | stderr | await self.app(scope, receive, _send)
2024-02-29 16:08:08 | ERROR | stderr | File "/root/miniconda3/lib/python3.9/site-packages/starlette/middleware/exceptions.py", line 62, in __call__
2024-02-29 16:08:08 | ERROR | stderr | await wrap_app_handling_exceptions(self.app, conn)(scope, receive, send)
2024-02-29 16:08:08 | ERROR | stderr | File "/root/miniconda3/lib/python3.9/site-packages/starlette/_exception_handler.py", line 64, in wrapped_app
2024-02-29 16:08:08 | ERROR | stderr | raise exc
2024-02-29 16:08:08 | ERROR | stderr | File "/root/miniconda3/lib/python3.9/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
2024-02-29 16:08:08 | ERROR | stderr | await app(scope, receive, sender)
2024-02-29 16:08:08 | ERROR | stderr | File "/root/miniconda3/lib/python3.9/site-packages/starlette/routing.py", line 758, in __call__
2024-02-29 16:08:08 | ERROR | stderr | await self.middleware_stack(scope, receive, send)
2024-02-29 16:08:08 | ERROR | stderr | File "/root/miniconda3/lib/python3.9/site-packages/starlette/routing.py", line 778, in app
2024-02-29 16:08:08 | ERROR | stderr | await route.handle(scope, receive, send)
2024-02-29 16:08:08 | ERROR | stderr | File "/root/miniconda3/lib/python3.9/site-packages/starlette/routing.py", line 299, in handle
2024-02-29 16:08:08 | ERROR | stderr | await self.app(scope, receive, send)
2024-02-29 16:08:08 | ERROR | stderr | File "/root/miniconda3/lib/python3.9/site-packages/starlette/routing.py", line 79, in app
2024-02-29 16:08:08 | ERROR | stderr | await wrap_app_handling_exceptions(app, request)(scope, receive, send)
2024-02-29 16:08:08 | ERROR | stderr | File "/root/miniconda3/lib/python3.9/site-packages/starlette/_exception_handler.py", line 64, in wrapped_app
2024-02-29 16:08:08 | ERROR | stderr | raise exc
2024-02-29 16:08:08 | ERROR | stderr | File "/root/miniconda3/lib/python3.9/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
2024-02-29 16:08:08 | ERROR | stderr | await app(scope, receive, sender)
2024-02-29 16:08:08 | ERROR | stderr | File "/root/miniconda3/lib/python3.9/site-packages/starlette/routing.py", line 74, in app
2024-02-29 16:08:08 | ERROR | stderr | response = await func(request)
2024-02-29 16:08:08 | ERROR | stderr | File "/root/miniconda3/lib/python3.9/site-packages/fastapi/routing.py", line 278, in app
2024-02-29 16:08:08 | ERROR | stderr | raw_response = await run_endpoint_function(
2024-02-29 16:08:08 | ERROR | stderr | File "/root/miniconda3/lib/python3.9/site-packages/fastapi/routing.py", line 191, in run_endpoint_function
2024-02-29 16:08:08 | ERROR | stderr | return await dependant.call(**values)
2024-02-29 16:08:08 | ERROR | stderr | File "/root/miniconda3/lib/python3.9/site-packages/fastchat/serve/vllm_worker.py", line 231, in api_count_token
2024-02-29 16:08:08 | ERROR | stderr | return worker.count_token(params)
2024-02-29 16:08:08 | ERROR | stderr | File "/root/miniconda3/lib/python3.9/site-packages/fastchat/serve/base_model_worker.py", line 159, in count_token
2024-02-29 16:08:08 | ERROR | stderr | input_echo_len = self.tokenizer.num_tokens(prompt)
2024-02-29 16:08:08 | ERROR | stderr | AttributeError: 'Encoding' object has no attribute 'num_tokens'
2024-02-29 16:08:47 | INFO | model_worker | Send heart beat. Models: ['Qwen-1_8B-Chat-Int4']. Semaphore: None. call_ct: 0. worker_id: 92230372.
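The traceback above comes down to FastChat 0.2.36's `count_token` assuming a Hugging Face-style tokenizer, while Qwen's trust_remote_code tokenizer resolves to a tiktoken `Encoding` object, which is not callable (first exception) and has no `num_tokens` attribute (second exception). A minimal sketch of a counter that handles both flavors (an illustration, not FastChat's actual code; the function name is hypothetical) would fall back to `Encoding.encode()`:

```python
def count_tokens(tokenizer, prompt: str) -> int:
    """Count prompt tokens for both HF tokenizers and tiktoken Encodings."""
    try:
        # Hugging Face tokenizers (fast and slow) are callable and return
        # a BatchEncoding carrying an `input_ids` field.
        return len(tokenizer(prompt).input_ids)
    except TypeError:
        # tiktoken.Encoding (what Qwen's tokenizer hands the vLLM worker)
        # is not callable; it only exposes encode()/decode().
        return len(tokenizer.encode(prompt))
```

This is only a sketch of the shape a fix could take; in practice, pinning package versions as described below avoids patching FastChat at all.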
I encountered the same issue on Python 3.11.
Switching my environment to the following package versions resolved it:
accelerate==0.27.2
aiohttp==3.9.3
aiosignal==1.3.1
anyio==4.3.0
attrs==23.2.0
auto_gptq==0.7.1
certifi==2024.2.2
charset-normalizer==3.3.2
click==8.1.7
datasets==2.18.0
dill==0.3.8
einops==0.7.0
fastapi==0.110.0
filelock==3.13.1
frozenlist==1.4.1
fschat==0.2.26
fsspec==2024.2.0
gekko==1.0.6
h11==0.14.0
httpcore==1.0.4
httptools==0.6.1
httpx==0.27.0
huggingface-hub==0.21.3
idna==3.6
Jinja2==3.1.3
jsonschema==4.21.1
jsonschema-specifications==2023.12.1
markdown-it-py==3.0.0
markdown2==2.4.13
MarkupSafe==2.1.5
mdurl==0.1.2
mpmath==1.3.0
msgpack==1.0.8
multidict==6.0.5
multiprocess==0.70.16
networkx==3.2.1
nh3==0.2.15
ninja==1.11.1.1
numpy==1.26.4
nvidia-cublas-cu12==12.1.3.1
nvidia-cuda-cupti-cu12==12.1.105
nvidia-cuda-nvrtc-cu12==12.1.105
nvidia-cuda-runtime-cu12==12.1.105
nvidia-cudnn-cu12==8.9.2.26
nvidia-cufft-cu12==11.0.2.54
nvidia-curand-cu12==10.3.2.106
nvidia-cusolver-cu12==11.4.5.107
nvidia-cusparse-cu12==12.1.0.106
nvidia-nccl-cu12==2.19.3
nvidia-nvjitlink-cu12==12.3.101
nvidia-nvtx-cu12==12.1.105
openai==0.28.0
packaging==23.2
pandas==2.2.1
peft==0.9.0
prompt-toolkit==3.0.43
protobuf==4.25.3
psutil==5.9.8
pyarrow==15.0.0
pyarrow-hotfix==0.6
pydantic==1.10.13
Pygments==2.17.2
python-dateutil==2.9.0.post0
python-dotenv==1.0.1
pytz==2024.1
PyYAML==6.0.1
ray==2.9.3
referencing==0.33.0
regex==2023.12.25
requests==2.31.0
rich==13.7.1
rouge==1.0.1
rpds-py==0.18.0
safetensors==0.4.2
sentencepiece==0.2.0
shortuuid==1.0.12
six==1.16.0
sniffio==1.3.1
starlette==0.36.3
svgwrite==1.4.3
sympy==1.12
tiktoken==0.6.0
tokenizers==0.15.2
torch==2.2.0
tqdm==4.66.2
transformers==4.38.2
triton==2.2.0
typing_extensions==4.10.0
tzdata==2024.1
urllib3==2.2.1
uvicorn==0.27.1
uvloop==0.19.0
-e git+https://github.com/QwenLM/vllm-gptq.git@9bad890080e915eac8168c055948090c5cff6909#egg=vllm
watchfiles==0.21.0
wavedrom==2.0.3.post3
wcwidth==0.2.13
websockets==12.0
xformers==0.0.24
xxhash==3.4.1
yarl==1.9.4
Downgrading from fschat==0.2.36 to fschat==0.2.26 resolved this issue. I suspect it is caused by Qwen's tokenizer.
Not useful.