ChatPaper
Token length exceeds 4097
% python chat_paper.py --query "all: causal prompt learning" --filter_keys "causal prompt learning" --max_results 5 --language en
Key word: reinforcement learning
Query: all: causal prompt learning
Sort: SortCriterion.Relevance
all search:
0 Supporting Vision-Language Model Inference with Causality-pruning Knowledge Prompt 2022-05-23 07:51:15+00:00
1 IDIAPers @ Causal News Corpus 2022: Efficient Causal Relation Identification Through a Prompt-based Few-shot Approach 2022-10-14 20:47:37+00:00
2 Prompting through Prototype: A Prototype-based Prompt Learning on Pretrained Vision-Language Models 2022-10-19 19:13:07+00:00
3 Causal Intervention-based Prompt Debiasing for Event Argument Extraction 2022-10-04 12:32:00+00:00
4 Prompt Agnostic Essay Scorer: A Domain Generalization Approach to Cross-prompt Automated Essay Scoring 2020-08-04 10:17:38+00:00
filter_keys: causal prompt learning
Number of papers left after filtering:
filter_results: 1
filter_papers:
0 Supporting Vision-Language Model Inference with Causality-pruning Knowledge Prompt 2022-05-23 07:51:15+00:00
All_paper: 1
paper_path: ./pdf_files/all causal prompt learni-2023-03-21-08/Supporting Vision-Language Model Inference with Causality-pruning Knowledge Prompt.pdf
section_page_dict {'Abstract': 0, 'Introduction': 0, 'Related Work': 1, 'Methodology': 4, 'Method': 8, 'Experiments': 6, 'Conclusion': 7, 'References': 7}
0 Abstract 0
1 Introduction 0
start_page, end_page: 0 1
2 Related Work 1
start_page, end_page: 1 4
3 Methodology 4
start_page, end_page: 4 8
4 Method 8
start_page, end_page: 8 6
5 Experiments 6
start_page, end_page: 6 7
6 Conclusion 7
start_page, end_page: 7 7
7 References 7
start_page, end_page: 7 13
summary_result:
- Title: Supporting Vision-Language Model Inference with Causality-pruning Knowledge Prompt (Chinese translation: 支持因果削减知识提示的视觉语言模型推理)
- Authors: Jiangmeng Li, Wenyi Mo, Wenwen Qiang, Bing Su, and Changwen Zheng
- Affiliation: Institute of Software, Chinese Academy of Sciences, Beijing, China (first, third, and fifth authors); Renmin University of China, Beijing, China (second and fourth authors)
- Keywords: multi-modal, vision-language model, prompt engineering, causality, knowledge graph, ontology
- Urls: Paper: http://arxiv.org/abs/2205.11100v1, Github: None
- Summary:
- (1): This paper focuses on improving the transferability of pre-trained vision-language models to downstream tasks in a zero-shot manner.
- (2): Previous works generate fixed or learnable prompts to narrow the performance gap between training-phase and test-phase tasks. However, existing prompt methods do not exploit the semantic information of textual labels, and manually constructing prompts with rich semantic information requires domain expertise and is time-consuming. To address this, the authors propose the Causality-pruning Knowledge Prompt (CapKP), which retrieves an ontological knowledge graph by treating textual labels as queries and applies causality pruning to refine the derived semantic information.
- (3): The authors conduct extensive evaluations demonstrating the effectiveness of CapKP in adapting pre-trained vision-language models to downstream image recognition. CapKP outperforms both manual-prompt and learnable-prompt methods and achieves superior domain generalization compared to benchmark approaches.
- (4): With 8 shots, CapKP improves on manual-prompt and learnable-prompt methods by 12.51% and 1.39% on average, respectively, supporting its effectiveness at improving the transferability of pre-trained vision-language models in a zero-shot manner.

prompt_token_used: 2279 completion_token_used: 429 total_token_used: 2708 response_time: 16.399 s

Traceback (most recent call last):
  File "/Users/jiangwenzhao/Documents/GitHub/ChatPaper/chat_paper.py", line 469, in <module>
    main(args=args)
  File "/Users/jiangwenzhao/Documents/GitHub/ChatPaper/chat_paper.py", line 448, in main
    reader1.summary_with_chat(paper_list=paper_list)
  File "/Users/jiangwenzhao/Documents/GitHub/ChatPaper/chat_paper.py", line 208, in summary_with_chat
    chat_method_text = self.chat_method(text=text)
  File "/Users/jiangwenzhao/opt/anaconda3/envs/chatgpt/lib/python3.9/site-packages/tenacity/__init__.py", line 289, in wrapped_f
    return self(f, *args, **kw)
  File "/Users/jiangwenzhao/opt/anaconda3/envs/chatgpt/lib/python3.9/site-packages/tenacity/__init__.py", line 379, in __call__
    do = self.iter(retry_state=retry_state)
  File "/Users/jiangwenzhao/opt/anaconda3/envs/chatgpt/lib/python3.9/site-packages/tenacity/__init__.py", line 325, in iter
    raise retry_exc.reraise()
  File "/Users/jiangwenzhao/opt/anaconda3/envs/chatgpt/lib/python3.9/site-packages/tenacity/__init__.py", line 158, in reraise
    raise self.last_attempt.result()
  File "/Users/jiangwenzhao/opt/anaconda3/envs/chatgpt/lib/python3.9/concurrent/futures/_base.py", line 439, in result
    return self.__get_result()
  File "/Users/jiangwenzhao/opt/anaconda3/envs/chatgpt/lib/python3.9/concurrent/futures/_base.py", line 391, in __get_result
    raise self._exception
  File "/Users/jiangwenzhao/opt/anaconda3/envs/chatgpt/lib/python3.9/site-packages/tenacity/__init__.py", line 382, in __call__
    result = fn(*args, **kwargs)
  File "/Users/jiangwenzhao/Documents/GitHub/ChatPaper/chat_paper.py", line 324, in chat_method
    response = openai.ChatCompletion.create(
  File "/Users/jiangwenzhao/opt/anaconda3/envs/chatgpt/lib/python3.9/site-packages/openai/api_resources/chat_completion.py", line 25, in create
    return super().create(*args, **kwargs)
  File "/Users/jiangwenzhao/opt/anaconda3/envs/chatgpt/lib/python3.9/site-packages/openai/api_resources/abstract/engine_api_resource.py", line 153, in create
    response, _, api_key = requestor.request(
  File "/Users/jiangwenzhao/opt/anaconda3/envs/chatgpt/lib/python3.9/site-packages/openai/api_requestor.py", line 226, in request
    resp, got_stream = self._interpret_response(result, stream)
  File "/Users/jiangwenzhao/opt/anaconda3/envs/chatgpt/lib/python3.9/site-packages/openai/api_requestor.py", line 619, in _interpret_response
    self._interpret_response_line(
  File "/Users/jiangwenzhao/opt/anaconda3/envs/chatgpt/lib/python3.9/site-packages/openai/api_requestor.py", line 679, in _interpret_response_line
    raise self.handle_error_response(
openai.error.InvalidRequestError: This model's maximum context length is 4097 tokens. However, your messages resulted in 4132 tokens. Please reduce the length of the messages.
How should the code be modified to fix this? It seems that the maximum token limit for openai.ChatCompletion.create() is 4097.
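Note that the 4097 figure is the model's total context window, shared by prompt and completion; the max_tokens argument of openai.ChatCompletion.create() only caps the completion and cannot raise the window, so the prompt text itself has to be shortened. Below is a minimal sketch of one possible fix, not ChatPaper's actual logic: count tokens with tiktoken and truncate the section text before building the messages. MODEL, CONTEXT_WINDOW, RESERVED_FOR_COMPLETION, and truncate_to_budget are illustrative names, not variables from chat_paper.py.

```python
# Sketch: truncate a prompt so prompt + completion fit in the context
# window. Requires `pip install tiktoken`; the constants below are
# assumptions for illustration, not values taken from chat_paper.py.
import tiktoken

MODEL = "gpt-3.5-turbo"
CONTEXT_WINDOW = 4097           # total budget shared by prompt and completion
RESERVED_FOR_COMPLETION = 700   # assumed headroom for the model's reply

def truncate_to_budget(text: str) -> str:
    """Cut `text` so the prompt fits the window with room left to answer."""
    enc = tiktoken.encoding_for_model(MODEL)
    budget = CONTEXT_WINDOW - RESERVED_FOR_COMPLETION
    tokens = enc.encode(text)
    if len(tokens) <= budget:
        return text
    # Keep the head of the section; the tail is the least costly to drop
    # when summarizing a paper front to back.
    return enc.decode(tokens[:budget])
```

The truncated string would then go into the messages that chat_method sends, and reserving explicit headroom for the completion avoids exactly the 35-token overflow (4132 vs. 4097) shown in the traceback.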
Is this run using my latest code? I tried to make some changes yesterday, but it looks like they didn't fully solve it.
Hey, I updated today (the 25th) and still hit this problem. How can I fix it?
It's only slightly over the limit; try a different paper for now~
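Since the overflow here is only 35 tokens, another option, sketched below and not a ChatPaper feature, is to catch the context-length error and retry with a slightly trimmed prompt instead of switching papers. chat_with_trim and trim_step are made-up names; the error class matches the openai-python 0.x API seen in the traceback above.

```python
# Sketch: retry a slightly-too-long request, shaving characters off the
# prompt each time. Uses the v0.x openai-python API (openai.ChatCompletion,
# openai.error.InvalidRequestError) that appears in the traceback above.
import openai

def chat_with_trim(messages, trim_step=200, max_retries=3):
    """Retry on context-length overflow, shortening the prompt each attempt."""
    for _ in range(max_retries):
        try:
            return openai.ChatCompletion.create(model="gpt-3.5-turbo",
                                                messages=messages)
        except openai.error.InvalidRequestError as e:
            if "maximum context length" not in str(e):
                raise  # unrelated request error: surface it
            # Trim the tail of the longest message (rough cut by characters,
            # not tokens) and try again; this mutates `messages` in place.
            longest = max(messages, key=lambda m: len(m["content"]))
            longest["content"] = longest["content"][:-trim_step]
    raise RuntimeError("prompt still too long after retries")
```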