When running the agent build code, I get the following error: Qwen2TokenizerFast has no attribute 'get_token_id'. Did you mean: 'sep_token_id'?
Self Checks
- [X] This template is only for bug reports. For questions, please visit Discussions.
- [X] I have thoroughly reviewed the project documentation (installation, training, inference) but couldn't find information to solve my problem.
- [X] I have searched for existing issues, including closed ones. Search issues
- [X] I confirm that I am using English to submit this report (I have read and agree to the Language Policy).
- [X] [FOR CHINESE USERS] Please submit issues in English, or they will be closed. Thank you! :)
- [X] Please do not modify this template and fill in all required fields.
Cloud or Self Hosted
Self Hosted (Source)
Environment Details
Python 3.10, fish-speech 1.5, Windows
Steps to Reproduce
When I run python -m tools.api --llama-checkpoint-path checkpoints/fish-agent-v0.1-3b/ --mode agent --compile, I get: AttributeError: Qwen2TokenizerFast has no attribute 'get_token_id'. Did you mean: 'sep_token_id'?
✔️ Expected Behavior
No response
❌ Actual Behavior
No response
The same error occurred.
Use version 1.4 for now.
Please use version 1.4 of the code, or refer to version 1.4 to modify the fish_speech/tokenizer.py file.
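For anyone unsure what "refer to version 1.4" means in practice: the 1.5 agent path dereferences tokenizer attributes (get_token_id, semantic_id_to_token_id) that a plain Qwen2TokenizerFast does not have. The sketch below is purely illustrative, not the project's actual code — the helper name, the stand-in tokenizer class, and the `<|semantic:N|>` token naming are all assumptions drawn from this thread — but it shows the shape of the mapping the agent code expects to find:

```python
class MinimalTokenizer:
    """Hypothetical stand-in for a HF fast tokenizer: maps token strings to ids."""

    def __init__(self, vocab: dict[str, int]):
        self.vocab = vocab

    def convert_tokens_to_ids(self, token: str) -> int:
        return self.vocab[token]


def attach_semantic_mapping(tokenizer, num_semantic_tokens: int):
    """Attach the attributes the 1.5 agent code path dereferences.

    Assumes semantic tokens are registered under names like "<|semantic:0|>";
    builds semantic_id_to_token_id so a lookup such as the one in
    conversation.py (tokenizer.semantic_id_to_token_id[i]) can succeed.
    """
    tokenizer.semantic_id_to_token_id = {
        i: tokenizer.convert_tokens_to_ids(f"<|semantic:{i}|>")
        for i in range(num_semantic_tokens)
    }
    # get_token_id is the other missing attribute from the original report.
    tokenizer.get_token_id = tokenizer.convert_tokens_to_ids
    return tokenizer


vocab = {f"<|semantic:{i}|>": 100 + i for i in range(4)}
tok = attach_semantic_mapping(MinimalTokenizer(vocab), 4)
print(tok.semantic_id_to_token_id[2])  # → 102
```

Again, this is a sketch of the idea only; the reliable fix reported in this thread is to use the matching code version rather than patch the tokenizer by hand.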
Hello, fish-speech is an amazing library, and I truly appreciate your work. I have also encountered the same error with the latest version.
(fish-speech) root@3aac063e8786:/app/data/zhouwen/fish-speech# python -m tools.api_server --llama-checkpoint-path /app/data/zhouwen/fish-speech/checkpoints/fish-agent-v0.1-3b --mode agent --compile
INFO: Started server process [3919003]
INFO: Waiting for application startup.
Exception in thread Thread-1 (worker):
Traceback (most recent call last):
File "/app/data/zhouwen/Anaconda3/envs/fish-speech/lib/python3.10/threading.py", line 1016, in _bootstrap_inner
self.run()
File "/app/data/zhouwen/Anaconda3/envs/fish-speech/lib/python3.10/threading.py", line 953, in run
self._target(*self._args, **self._kwargs)
File "/app/data/zhouwen/fish-speech/tools/llama/generate.py", line 960, in worker
model, decode_one_token = load_model(
File "/app/data/zhouwen/fish-speech/tools/llama/generate.py", line 677, in load_model
model: Union[NaiveTransformer, DualARTransformer] = BaseTransformer.from_pretrained(
File "/app/data/zhouwen/fish-speech/fish_speech/models/text2semantic/llama.py", line 411, in from_pretrained
model = model_cls(config, tokenizer=tokenizer)
File "/app/data/zhouwen/fish-speech/fish_speech/models/text2semantic/llama.py", line 542, in __init__
super().__init__(config, init_weights=False, tokenizer=tokenizer)
File "/app/data/zhouwen/fish-speech/fish_speech/models/text2semantic/llama.py", line 176, in __init__
self.semantic_token_ids = [
File "/app/data/zhouwen/fish-speech/fish_speech/models/text2semantic/llama.py", line 177, in <listcomp>
AttributeError: Qwen2TokenizerFast has no attribute 'get_token_id'. Did you mean: 'sep_token_id'?
Following your previous responses, I tried using version 1.4.0, but the error still persists.
(fish-speech) root@3aac063e8786:/app/data/zhouwen/fish-speech-1.4.0# python -m tools.api --llama-checkpoint-path /app/data/zhouwen/fish-speech-1.4.0/checkpoints/fish-agent-v0.1-3b --compile
2024-12-09 06:17:49.674 | INFO | main:
I look forward to your reply.
1.4.3
Thank you so much!!! It works now.
@AnyaCoder Hello, first of all, let me thank you for your support. I hit the same issue (AttributeError: 'Qwen2TokenizerFast' object has no attribute 'semantic_id_to_token_id') when running agent-v0.1-3b based on fish-speech 1.5.
On the server side, I cloned the latest code so far, copied tokenizer.tiktoken from fish-speech-1.5 into fish-agent-v0.1-3b, and ran: python -m tools.api_server --llama-checkpoint-path checkpoints/fish-agent-v0.1-3b/ --mode agent --compile
The error log is shown below:
2024-12-26 15:19:02.000 | INFO | tools.server.views:vqgan_encode:53 - [EXEC] VQGAN encode time: 467.96ms
INFO: 127.0.0.1:47610 - "POST /v1/vqgan/encode HTTP/1.1" 200 OK
INFO: 127.0.0.1:47610 - "POST /v1/chat HTTP/1.1" 200 OK
ERROR: Exception in ASGI application
Traceback (most recent call last):
File "/home/harr/anaconda3/envs/fish/lib/python3.10/site-packages/uvicorn/protocols/http/h11_impl.py", line 403, in run_asgi
result = await app( # type: ignore[func-returns-value]
File "/home/harr/anaconda3/envs/fish/lib/python3.10/site-packages/uvicorn/middleware/proxy_headers.py", line 60, in __call__
return await self.app(scope, receive, send)
File "/home/harr/anaconda3/envs/fish/lib/python3.10/site-packages/kui/asgi/applications.py", line 163, in __call__
await self.app(scope, receive, send)
File "/home/harr/anaconda3/envs/fish/lib/python3.10/site-packages/kui/asgi/applications.py", line 118, in app
return await getattr(self, scope_type)(scope, receive, send)
File "/home/harr/anaconda3/envs/fish/lib/python3.10/site-packages/kui/asgi/applications.py", line 142, in http
return await response(scope, receive, send)
File "/home/harr/anaconda3/envs/fish/lib/python3.10/site-packages/baize/asgi/responses.py", line 167, in __call__
chunk = await generator.asend(None)
File "/home/harr/anaconda3/envs/fish/lib/python3.10/site-packages/baize/asgi/responses.py", line 191, in render_stream
async for chunk in self.iterable:
File "/media/harr/King/fish-speech-1.5.0/tools/server/agent/__init__.py", line 37, in streaming_generator
for i in generator:
File "/media/harr/King/fish-speech-1.5.0/tools/server/agent/__init__.py", line 16, in execute_request
prompt, im_end_id = prepare_messages(request, tokenizer, config)
File "/media/harr/King/fish-speech-1.5.0/tools/server/agent/pre_generation_utils.py", line 41, in prepare_messages
prompt = conv.encode_for_inference(
File "/media/harr/King/fish-speech-1.5.0/fish_speech/conversation.py", line 185, in encode_for_inference
encoded = self.encode(tokenizer, add_shift=False)
File "/media/harr/King/fish-speech-1.5.0/fish_speech/conversation.py", line 138, in encode
encoded = message.encode(
File "/media/harr/King/fish-speech-1.5.0/fish_speech/conversation.py", line 76, in encode
[
File "/media/harr/King/fish-speech-1.5.0/fish_speech/conversation.py", line 77, in <listcomp>
tokenizer.semantic_id_to_token_id[i.item()]
AttributeError: 'Qwen2TokenizerFast' object has no attribute 'semantic_id_to_token_id'
On the client side, I ran: python -m tools.e2e_webui
The error log is shown below:
File "/home/harr/anaconda3/envs/fish/lib/python3.10/site-packages/httpx/_transports/default.py", line 101, in map_httpcore_exceptions
yield
File "/home/harr/anaconda3/envs/fish/lib/python3.10/site-packages/httpx/_transports/default.py", line 271, in __aiter__
async for part in self._httpcore_stream:
File "/home/harr/anaconda3/envs/fish/lib/python3.10/site-packages/httpcore/_async/connection_pool.py", line 407, in __aiter__
raise exc from None
File "/home/harr/anaconda3/envs/fish/lib/python3.10/site-packages/httpcore/_async/connection_pool.py", line 403, in __aiter__
async for part in self._stream:
File "/home/harr/anaconda3/envs/fish/lib/python3.10/site-packages/httpcore/_async/http11.py", line 342, in __aiter__
raise exc
File "/home/harr/anaconda3/envs/fish/lib/python3.10/site-packages/httpcore/_async/http11.py", line 334, in __aiter__
async for chunk in self._connection._receive_response_body(**kwargs):
File "/home/harr/anaconda3/envs/fish/lib/python3.10/site-packages/httpcore/_async/http11.py", line 203, in _receive_response_body
event = await self._receive_event(timeout=timeout)
File "/home/harr/anaconda3/envs/fish/lib/python3.10/site-packages/httpcore/_async/http11.py", line 213, in _receive_event
with map_exceptions({h11.RemoteProtocolError: RemoteProtocolError}):
File "/home/harr/anaconda3/envs/fish/lib/python3.10/contextlib.py", line 153, in __exit__
self.gen.throw(typ, value, traceback)
File "/home/harr/anaconda3/envs/fish/lib/python3.10/site-packages/httpcore/_exceptions.py", line 14, in map_exceptions
raise to_exc(exc) from exc
httpcore.RemoteProtocolError: peer closed connection without sending complete message body (incomplete chunked read)
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/harr/anaconda3/envs/fish/lib/python3.10/site-packages/gradio/queueing.py", line 625, in process_events
response = await route_utils.call_process_api(
File "/home/harr/anaconda3/envs/fish/lib/python3.10/site-packages/gradio/route_utils.py", line 322, in call_process_api
output = await app.get_blocks().process_api(
File "/home/harr/anaconda3/envs/fish/lib/python3.10/site-packages/gradio/blocks.py", line 2047, in process_api
result = await self.call_function(
File "/home/harr/anaconda3/envs/fish/lib/python3.10/site-packages/gradio/blocks.py", line 1606, in call_function
prediction = await utils.async_iteration(iterator)
File "/home/harr/anaconda3/envs/fish/lib/python3.10/site-packages/gradio/utils.py", line 714, in async_iteration
return await anext(iterator)
File "/home/harr/anaconda3/envs/fish/lib/python3.10/site-packages/gradio/utils.py", line 819, in asyncgen_wrapper
response = await iterator.__anext__()
File "/media/harr/King/fish-speech-1.5.0/tools/e2e_webui.py", line 100, in process_audio_input
async for event in agent.stream(
File "/media/harr/King/fish-speech-1.5.0/tools/fish_e2e.py", line 230, in stream
async for chunk in response.aiter_bytes():
File "/home/harr/anaconda3/envs/fish/lib/python3.10/site-packages/httpx/_models.py", line 997, in aiter_bytes
async for raw_bytes in self.aiter_raw():
File "/home/harr/anaconda3/envs/fish/lib/python3.10/site-packages/httpx/_models.py", line 1055, in aiter_raw
async for raw_stream_bytes in self.stream:
File "/home/harr/anaconda3/envs/fish/lib/python3.10/site-packages/httpx/_client.py", line 176, in __aiter__
async for chunk in self._stream:
File "/home/harr/anaconda3/envs/fish/lib/python3.10/site-packages/httpx/_transports/default.py", line 270, in __aiter__
with map_httpcore_exceptions():
File "/home/harr/anaconda3/envs/fish/lib/python3.10/contextlib.py", line 153, in __exit__
self.gen.throw(typ, value, traceback)
File "/home/harr/anaconda3/envs/fish/lib/python3.10/site-packages/httpx/_transports/default.py", line 118, in map_httpcore_exceptions
raise mapped_exc(message) from exc
httpx.RemoteProtocolError: peer closed connection without sending complete message body (incomplete chunked read)
I look forward to your reply.
Try running inference with the command-line script to see if the problem reproduces.
It reproduces 100% of the time when I first record voice in the Gradio WebUI, and I cannot get more useful information from the script. I started the server and WebUI following docs/zh/start_agent.md.
Expectation: successfully record voice in the WebUI, then send the voice to the server for processing. Issue: after starting the Gradio WebUI and stopping the voice recording, four errors are displayed across multiple labels, and at the same time the related error logs appear in the terminals.
This issue is stale because it has been open for 30 days with no activity.
Did anyone manage to fix this? Still an issue for me
Please update the code repository
same
same error here
How do I do that? I mean, refer to 1.4 in which file, and where?
Please update all files; the two versions are incompatible.
Again, what do you mean by updating all the files? I installed fish-speech the day before and wanted to try fish-speech-agent, and this error popped up. I am not coming from a previous version or using altered files. Do you mean we should use older versions of the files, and if so, which files?
Firstly, some people are unable to run the fish-speech agent on version 1.5, so it is recommended to use version 1.4 to run it (some parts of the code have been modified, but not the agent's code).
Use v1.4.3 for stable agent use.
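As a quick sanity check after switching versions, one could verify up front that the loaded tokenizer actually exposes the attribute the agent path dereferences. The attribute name is taken from the tracebacks in this thread; the helper function and the stand-in classes below are hypothetical, not part of fish-speech:

```python
def supports_agent_path(tokenizer) -> bool:
    """True if the tokenizer exposes the semantic-token mapping that the
    1.5 agent code dereferences (attribute name taken from the errors above)."""
    return hasattr(tokenizer, "semantic_id_to_token_id")


# Stand-in objects for demonstration; a real check would first load the
# tokenizer from the checkpoint directory.
class PlainTokenizer:  # behaves like a bare Qwen2TokenizerFast
    pass


class PatchedTokenizer:  # behaves like a tokenizer the agent path can use
    semantic_id_to_token_id = {0: 100}


print(supports_agent_path(PlainTokenizer()))    # → False
print(supports_agent_path(PatchedTokenizer()))  # → True
```

Running such a check before starting the API server would turn the mid-request AttributeError into an immediate, explicit failure.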