[Bug Report] `TypeError: data must be str, not dict` during fin_quant coding step
🐛 Bug Description
When running the rdagent fin_quant scenario, a `TypeError: data must be str, not dict` occurs intermittently. When it does occur, it consistently happens at the 25% progress mark of the workflow, during the `step_name=coding` phase.
The root cause appears to be that the agent framework receives a structured dictionary (`dict`) from the LLM (GPT-4 Turbo) while its file-writing function expects a raw Python code string (`str`). This happens even with `REASONING_THINK_RM=True` enabled in the `.env` file.
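For reference, the wording of the exception matches what `pathlib.Path.write_text` raises when it is handed anything other than a string; whether RD-Agent actually writes the file via pathlib is an assumption, and the file name and dict payload below are made up for illustration:

```python
from pathlib import Path

# A structured reply instead of the expected raw code string.
llm_response = {"code": "import pandas as pd\n# ...factor implementation..."}

# pathlib rejects non-str payloads with exactly this message:
#   TypeError: data must be str, not dict
Path("factor.py").write_text(llm_response)
```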
To Reproduce
Steps to reproduce the behavior:
- Set up a clean server with Ubuntu 22.04 LTS.
- Clone the latest `main` branch of the RD-Agent repository.
- Create and activate a Conda environment with Python 3.10.
- Install dependencies: `pip install pyqlib`, clone the `qlib` repository and install it from source (`pip install -e .`), and finally run `make dev` in the RD-Agent directory.
- Configure a `.env` file with a valid OpenAI API key, setting `MODEL="gpt-4-turbo"`, `IS_US_STOCK=True`, and `REASONING_THINK_RM=True`.
- Prepare a custom Qlib data source using `.bin` files and configure `~/.qlib/qlib_config/qlib.yaml` (and `/root/.qlib/qlib_config/qlib.yaml`) to point to it (a quick sanity-check snippet follows this list).
- Run the command: `conda run -n rdagent rdagent fin_quant`.
- Observe the potential `TypeError` at the 25% mark, which can be intermittent.
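Before launching the agent, it can help to confirm that the custom `.bin` data source actually loads through qlib's own API. This is only a sanity check; the provider path, ticker, and date range below are placeholders, not values from this report:

```python
import qlib
from qlib.data import D

# Point qlib at the custom .bin data directory (placeholder path).
qlib.init(provider_uri="~/.qlib/qlib_data/us_data", region="us")

# If this returns a non-empty frame, the data source is readable.
df = D.features(["AAPL"], ["$close"], start_time="2020-01-01", end_time="2020-12-31")
print(df.head())
```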
Expected Behavior
The agent should robustly parse the response from the LLM. It should be able to extract the Python code string, even if the response is a dictionary, and then save it to a file, allowing the workflow to proceed without raising a `TypeError`.
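A minimal sketch of that defensive parsing, assuming the reply is either a raw code string or a dict carrying the code under a key such as `"code"` (the helper name and key names are illustrative, not RD-Agent's actual API):

```python
def extract_code_str(response) -> str:
    """Best-effort extraction of a Python code string from an LLM reply.

    Illustrative only; RD-Agent's real parsing lives in its coder modules.
    """
    if isinstance(response, str):
        return response
    if isinstance(response, dict):
        # Common shapes: {"code": "..."} or {"factor.py": "..."}.
        for key in ("code", "content"):
            if isinstance(response.get(key), str):
                return response[key]
        # Fall back to the first string value found in the dict.
        for value in response.values():
            if isinstance(value, str):
                return value
    raise TypeError(f"could not extract code from {type(response).__name__}")


# Both shapes yield a plain str that is safe to write to a file.
assert extract_code_str("print('hi')") == "print('hi')"
assert extract_code_str({"code": "print('hi')"}) == "print('hi')"
```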
Screenshot
The error log from the terminal is as follows:
same issue here
Hasn't this problem been around for quite a few versions now?
Same issue here using GPT-5-Nano, hope it gets fixed soon.
Same issue here when using GPT-4o.
For those who can read Chinese: I eventually found the problem was that I wasn't using the specified version of qlib. Even if you do, though, you'll still run into LLM context-window issues later on, unless you use a Gemini model for the whole run.
Has this been resolved? Could you share how you fixed it?
I've solved this problem by modifying the assign_code_list_to_evo function under factor_coder's and model_coder's evolving_strategy.py.
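For anyone attempting a similar workaround, the core idea is to coerce each generated item to a plain string before it is assigned to the workspace. A rough, standalone sketch under that assumption (this is not the actual RD-Agent code, and the `"code"` key is a guess at the dict shape):

```python
def coerce_code_list(code_list):
    """Sketch of the guard described above (not RD-Agent's real
    assign_code_list_to_evo): normalize each generated item to a plain
    str before it is stored, so later file writes never see a dict."""
    normalized = []
    for item in code_list:
        if isinstance(item, dict):
            code = item.get("code")
            if not isinstance(code, str):
                # Fall back to the first string value in the dict.
                code = next((v for v in item.values() if isinstance(v, str)), None)
            item = code
        if not isinstance(item, str):
            raise TypeError(f"expected code str, got {type(item).__name__}")
        normalized.append(item)
    return normalized


# The original assignment loop would then consume the normalized list.
print(coerce_code_list(["print('a')", {"code": "print('b')"}]))
```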
Thanks, it's fixed now.
This issue is fixed in PR 1279; you can install the latest version (0.8.0) of RD-Agent, or pull the latest code and retry.