When run "rdagent fin_factor", get KeyError
🐛 Bug Description
After setting up RD-Agent on Ubuntu and running "rdagent fin_factor", I get the following error:
[8:MainThread](2025-06-29 00:41:26,986) INFO - qlib.Initialization - [init.py:74] - qlib successfully initialized based on client settings.
[8:MainThread](2025-06-29 00:41:26,986) INFO - qlib.Initialization - [init.py:76] - data_path={'__DEFAULT_FREQ': PosixPath('/root/.qlib/qlib_data/cn_data')}
[8:MainThread](2025-06-29 00:42:01,061) ERROR - qlib.workflow - [utils.py:41] - An exception has been raised[KeyError: "['SH600001' 'SH600003' 'SH600005' 'SH600087' 'SH600102'] not in index"].
I downloaded the public cn_data bundle with: wget https://github.com/chenditc/investment_data/releases/latest/download/qlib_bin.tar.gz
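One possible explanation for this KeyError is that the symbols in the message are listed in the bundle's instrument files but have no corresponding feature data. Below is a minimal sketch for checking this, assuming the default qlib layout under ~/.qlib/qlib_data/cn_data (the data_path shown in the log) with instruments/ and features/ subdirectories; the exact layout of the downloaded bundle may differ.

```bash
# Assumed default qlib data layout: <data_dir>/{calendars,instruments,features}
DATA_DIR=~/.qlib/qlib_data/cn_data

# Check whether the symbols from the error message appear in the instrument lists
grep -ril "SH600001" "$DATA_DIR/instruments/"

# Feature data lives in one lower-case directory per instrument; a missing
# directory here would be consistent with the "not in index" KeyError
for sym in sh600001 sh600003 sh600005 sh600087 sh600102; do
    [ -d "$DATA_DIR/features/$sym" ] || echo "$sym: no feature data found"
done
```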
To Reproduce
Steps to reproduce the behavior:
- set up R&D Agent as document required
- download data by "wget https://github.com/chenditc/investment_data/releases/latest/download/qlib_bin.tar.gz"
- run "rdagent fin_factor"
Expected Behavior
Screenshot
Environment
Note: Users can run rdagent collect_info to get system information and paste it directly here.
- Name of current operating system: Ubuntu 22.04.5 LTS
- Processor architecture: x86_64
- System, version, and hardware information:
  - Computer
    - Processor: Intel(R) Core(TM) i7-4712HQ CPU @ 2.30GHz
    - Memory: 16277MB (7265MB used)
    - Machine Type: Portable
    - Operating System: Ubuntu 22.04.5 LTS
  - SCSI Disks
    - ATA SAMSUNG SSD PM85
    - SanDisk Extreme 55AE
    - SanDisk SES Device
- Version number of the system: 22.04 LTS
- Python version: 3.10.18
- Container ID: 5b9ee0c00c94
- Container Name: pedantic_einstein
- Container Status: Created
- Image ID used by the container: local_qlib:latest
- Image tag used by the container:
- Container port mapping:
- Container Label:
- Startup Commands: rdagent fin_factor
- RD-Agent version:
- Package version:
Additional Notes
Hi @xiongsimon, thanks for the detailed report!
Could you try upgrading to the latest RD-Agent version (or pulling the newest code from the repo) and see if the problem still occurs?
If it does, feel free to open a new issue with the updated logs, and we'd be happy to help investigate further!
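A minimal sketch of the upgrade step, assuming RD-Agent is installed from PyPI under the distribution name rdagent (otherwise, reinstall from the GitHub repository):

```bash
# Upgrade the released package (assumes the PyPI distribution name is "rdagent")
pip install --upgrade rdagent

# Alternatively, install the latest code from the repository
git clone https://github.com/microsoft/RD-Agent.git
cd RD-Agent
pip install -e .
```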