
Running the C++ project analysis fails at the very end with: ValueError: not enough values to unpack (expected 2, got 1)

Open · cvt220106 opened this issue 1 year ago · 9 comments

  • (1) Describe the bug: On version 3.1, the same error is already printed when running `python main.py`, but the service still starts. However, when I run the C++ project analysis inside the service, it fails at the very end with: ValueError: not enough values to unpack (expected 2, got 1)

  • (2) Screen Shot: a screenshot of `python main.py` [screenshot] and a screenshot of the error at the end of the C++ analysis [screenshot]

  • (3) Terminal Traceback: the full traceback of the C++ analysis error

```
Traceback (most recent call last):
  File "F:\git_clone\GPT_appliacation\chatgpt_academic\crazy_functions\crazy_utils.py", line 202, in _req_gpt
    gpt_say = predict_no_ui_long_connection(
  File "F:\git_clone\GPT_appliacation\chatgpt_academic\request_llm\bridge_all.py", line 143, in predict_no_ui_long_connection
    return method(inputs, llm_kwargs, history, sys_prompt, observe_window, console_slience)
  File "F:\git_clone\GPT_appliacation\chatgpt_academic\request_llm\bridge_chatgpt.py", line 85, in predict_no_ui_long_connection
    raise ConnectionAbortedError("OpenAI拒绝了请求:" + error_msg)
ConnectionAbortedError: OpenAI拒绝了请求:{"error": {"message": "This model's maximum context length is 4097 tokens. However, your messages resulted in 4153 tokens. Please reduce the length of the messages.", "type": "invalid_request_error", "param": "messages", "code": "context_length_exceeded"}}

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "F:\git_clone\GPT_appliacation\chatgpt_academic\toolbox.py", line 64, in decorated
    yield from f(txt, top_p, temperature, chatbot, history, systemPromptTxt, WEB_PORT)
  File "F:\git_clone\GPT_appliacation\chatgpt_academic\crazy_functions\解析项目源代码.py", line 157, in 解析一个C项目
    yield from 解析源代码新(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt)
  File "F:\git_clone\GPT_appliacation\chatgpt_academic\crazy_functions\解析项目源代码.py", line 31, in 解析源代码新
    gpt_response_collection = yield from request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency(
  File "F:\git_clone\GPT_appliacation\chatgpt_academic\crazy_functions\crazy_utils.py", line 288, in request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency
    gpt_res = f.result()
  File "F:\opt\python311\Lib\concurrent\futures\_base.py", line 449, in result
    return self.__get_result()
  File "F:\opt\python311\Lib\concurrent\futures\_base.py", line 401, in __get_result
    raise self._exception
  File "F:\opt\python311\Lib\concurrent\futures\thread.py", line 58, in run
    result = self.fn(*self.args, **self.kwargs)
  File "F:\git_clone\GPT_appliacation\chatgpt_academic\crazy_functions\crazy_utils.py", line 217, in _req_gpt
    inputs, history = input_clipping(inputs, history, max_token_limit=MAX_TOKEN-EXCEED_ALLO)
  File "F:\git_clone\GPT_appliacation\chatgpt_academic\crazy_functions\crazy_utils.py", line 12, in input_clipping
    input_token_num = get_token_num(inputs)
  File "F:\git_clone\GPT_appliacation\chatgpt_academic\crazy_functions\crazy_utils.py", line 8, in get_token_num
    def get_token_num(txt): return len(enc.encode(txt, disallowed_special=()))
  File "F:\git_clone\GPT_appliacation\chatgpt_academic\request_llm\bridge_all.py", line 39, in encode
    encoder = self.get_encoder(self.model)
  File "F:\git_clone\GPT_appliacation\chatgpt_academic\request_llm\bridge_all.py", line 34, in get_encoder
    tmp = tiktoken.encoding_for_model(model)
  File "F:\opt\python311\Lib\site-packages\tiktoken\model.py", line 75, in encoding_for_model
    return get_encoding(encoding_name)
  File "F:\opt\python311\Lib\site-packages\tiktoken\registry.py", line 63, in get_encoding
    enc = Encoding(**constructor())
  File "F:\opt\python311\Lib\site-packages\tiktoken_ext\openai_public.py", line 64, in cl100k_base
    mergeable_ranks = load_tiktoken_bpe(
  File "F:\opt\python311\Lib\site-packages\tiktoken\load.py", line 115, in load_tiktoken_bpe
    return {
  File "F:\opt\python311\Lib\site-packages\tiktoken\load.py", line 117, in <dictcomp>
    for token, rank in (line.split() for line in contents.splitlines() if line)
ValueError: not enough values to unpack (expected 2, got 1)
```
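The final frames point at tiktoken's load_tiktoken_bpe, which parses every line of the cached BPE vocabulary file as a (base64 token, rank) pair. A line that lost its rank column, e.g. in a truncated or corrupted cache file, reproduces the exact error; here is a minimal sketch using made-up file contents:

```python
import base64

# Made-up cache contents: the second line is missing its rank column,
# so line.split() yields one element where two are expected.
contents = b"IQ== 0\nIg=="
ranks = {
    base64.b64decode(token): int(rank)
    for token, rank in (line.split() for line in contents.splitlines() if line)
}
# -> ValueError: not enough values to unpack (expected 2, got 1)
```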

  • (4) Material to Help Reproduce Bugs (if any)

Before submitting an issue:

  • Please try to update your code first if it is not the latest version.
  • Please check the project wiki, which has solutions to some common problems.

cvt220106 · Apr 20 '23 16:04

Which tiktoken version?

binary-husky · Apr 21 '23 04:04

Is this error already appearing at startup for you?

binary-husky · Apr 21 '23 04:04

@cvt220106

  1. Run `python -c 'from check_proxy import warm_up_modules; warm_up_modules()'` and check the error output
  2. Check your tiktoken version (one way to do this is shown in the one-liner after this list)
  3. Which pip index are you using? If it is not the official one, name it so everyone can blacklist it
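For step 2, a generic way to print the installed version from package metadata (standard Python, not project-specific):

```python
# Equivalent one-liner:
#   python -c "import importlib.metadata as m; print(m.version('tiktoken'))"
import importlib.metadata

# Reads the version recorded by pip for the installed tiktoken package.
print(importlib.metadata.version("tiktoken"))
```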

binary-husky · Apr 21 '23 04:04

@binary-husky

  1. tiktoken version: 0.3.3
  2. pip uses the Tsinghua mirror:

```ini
[global]
index-url = https://pypi.tuna.tsinghua.edu.cn/simple
[install]
trusted-host = pypi.tuna.tsinghua.edu.cn
```

  3. Error output of `python -c 'from check_proxy import warm_up_modules; warm_up_modules()'`:

```
None of PyTorch, TensorFlow >= 2.0, or Flax have been found. Models won't be available and only tokenizers, configuration and file/data utilities can be used.
正在加载tokenizer,如果是第一次运行,可能需要一点时间下载参数
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "F:\git_clone\GPT_appliacation\chatgpt_academic\check_proxy.py", line 140, in warm_up_modules
    enc.encode("模块预热", disallowed_special=())
  File "F:\git_clone\GPT_appliacation\chatgpt_academic\request_llm\bridge_all.py", line 39, in encode
    encoder = self.get_encoder(self.model)
  File "F:\git_clone\GPT_appliacation\chatgpt_academic\request_llm\bridge_all.py", line 34, in get_encoder
    tmp = tiktoken.encoding_for_model(model)
  File "F:\opt\python311\Lib\site-packages\tiktoken\model.py", line 75, in encoding_for_model
    return get_encoding(encoding_name)
  File "F:\opt\python311\Lib\site-packages\tiktoken\registry.py", line 63, in get_encoding
    enc = Encoding(**constructor())
  File "F:\opt\python311\Lib\site-packages\tiktoken_ext\openai_public.py", line 64, in cl100k_base
    mergeable_ranks = load_tiktoken_bpe(
  File "F:\opt\python311\Lib\site-packages\tiktoken\load.py", line 115, in load_tiktoken_bpe
    return {
  File "F:\opt\python311\Lib\site-packages\tiktoken\load.py", line 117, in <dictcomp>
    for token, rank in (line.split() for line in contents.splitlines() if line)
ValueError: not enough values to unpack (expected 2, got 1)
```

cvt220106 · Apr 21 '23 08:04

@cvt220106 @Alethiea-Cheng

I read through the code; it should be a cache problem. Search your machine for the data-gym-cache path and delete it. [screenshot]
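For anyone who prefers to do this programmatically, here is a minimal sketch that locates and removes the cache, assuming tiktoken's default cache location (the system temp directory, overridable via the TIKTOKEN_CACHE_DIR or DATA_GYM_CACHE_DIR environment variables):

```python
import os
import shutil
import tempfile

# Resolve the cache directory the way tiktoken does: an explicit
# environment variable wins, otherwise fall back to the system temp dir.
cache_dir = (
    os.environ.get("TIKTOKEN_CACHE_DIR")
    or os.environ.get("DATA_GYM_CACHE_DIR")
    or os.path.join(tempfile.gettempdir(), "data-gym-cache")
)
print(f"Deleting tiktoken cache at: {cache_dir}")
shutil.rmtree(cache_dir, ignore_errors=True)  # next run re-downloads the BPE file
```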

binary-husky · Apr 21 '23 10:04

[screenshot] On Linux it should be here (by default, data-gym-cache lives under the system temp directory, typically /tmp).

binary-husky · Apr 21 '23 10:04

> @cvt220106 @Alethiea-Cheng
> I read through the code; it should be a cache problem. Search your machine for the data-gym-cache path and delete it.

That works! Solved it, thanks. May I ask which piece of code you read to figure out it was a cache problem?

Alethiea-Cheng · Apr 21 '23 11:04

> May I ask which piece of code you read to figure out it was a cache problem?

Just the code of the functions that appear in the traceback. [screenshot]
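For context, the failing frame is load_tiktoken_bpe in tiktoken/load.py, which reads the BPE file through a cache helper (named read_file_cached in tiktoken). A simplified sketch of that caching logic, written from memory rather than copied from the tiktoken source, with requests standing in for its internal blob reader:

```python
import hashlib
import os
import tempfile
import uuid

import requests  # stand-in for tiktoken's internal blob reader


def read_file_cached(blobpath: str) -> bytes:
    """Simplified sketch (not the verbatim tiktoken source). The downloaded
    BPE file is reused as-is on every later run, with no integrity check in
    0.3.x, so a corrupted or truncated cached file keeps raising the same
    ValueError until it is deleted from data-gym-cache."""
    cache_dir = os.environ.get("TIKTOKEN_CACHE_DIR") or os.path.join(
        tempfile.gettempdir(), "data-gym-cache"
    )
    cache_path = os.path.join(cache_dir, hashlib.sha1(blobpath.encode()).hexdigest())
    if os.path.exists(cache_path):  # a bad cached file is trusted here
        with open(cache_path, "rb") as f:
            return f.read()
    contents = requests.get(blobpath).content  # download the BPE vocabulary
    os.makedirs(cache_dir, exist_ok=True)
    tmp_path = f"{cache_path}.{uuid.uuid4()}.tmp"
    with open(tmp_path, "wb") as f:  # write to a temp file, then rename
        f.write(contents)
    os.rename(tmp_path, cache_path)
    return contents
```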

binary-husky · Apr 21 '23 12:04

> Search your machine for the data-gym-cache path and delete it.

Yep, that solved it successfully for me too. Thanks!

cvt220106 · Apr 22 '23 04:04