[Feature Request]: Parallelize RAPTOR to run for multiple files in one go
Is there an existing issue for the same feature request?
- [X] I have checked the existing issues.
Is your feature request related to a problem?
No response
Describe the feature you'd like
Currently, RAPTOR runs one file at a time. This takes a long time while the CPU sits mostly idle.
The two images below show that the bottom half of the files are 50% complete but are still waiting for the file above them to finish its RAPTOR clustering.
RAPTOR for the bottom half could already start; it does not need to wait for the file above (a rough sketch of the idea follows the two screenshots).
image 1
image 2
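A minimal sketch of the idea, not RAGFlow's actual code: fan the per-file RAPTOR clustering out over a process pool so that independent files do not wait on each other. `run_raptor` is a hypothetical stand-in for whatever per-file RAPTOR entry point the pipeline uses.

```python
# Hypothetical sketch: run RAPTOR for several files concurrently instead of
# strictly one after another. `run_raptor` stands in for the real per-file step.
from concurrent.futures import ProcessPoolExecutor, as_completed


def run_raptor(file_path: str) -> str:
    """Placeholder for the per-file RAPTOR clustering step."""
    # ... chunk the file, embed, cluster, summarize ...
    return f"RAPTOR finished for {file_path}"


def raptor_all(file_paths: list[str], workers: int = 4) -> None:
    # Files are independent, so each file's clustering can start right away
    # instead of waiting for the previous file to finish.
    with ProcessPoolExecutor(max_workers=workers) as pool:
        futures = {pool.submit(run_raptor, p): p for p in file_paths}
        for fut in as_completed(futures):
            print(fut.result())


if __name__ == "__main__":
    raptor_all(["doc_a.pdf", "doc_b.pdf", "doc_c.pdf"])
```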
Describe implementation you've considered
No response
Documentation, adoption, use case
No response
Additional information
No response
The LLM calls are already made in parallel. The file-level parallelism depends on the number of task_executor processes.
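As a rough, hedged sketch of what that means in practice (the `rag/svr/task_executor.py` path and the numeric worker-id argument are assumptions and may differ between versions; check your deployment scripts), you could start several workers so that RAPTOR tasks for different files are picked up concurrently:

```python
# Hypothetical sketch: launch several task_executor workers so that tasks for
# different files can be processed at the same time.
# The script path and the worker-id argument are assumptions; verify them
# against your RAGFlow version before using this.
import subprocess
import sys

NUM_WORKERS = 4  # roughly one queued file can be processed per idle worker

procs = [
    subprocess.Popen([sys.executable, "rag/svr/task_executor.py", str(i)])
    for i in range(NUM_WORKERS)
]
for p in procs:
    p.wait()
```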
@NicksonYap Thanks a lot for your suggestion — and sorry for the delayed response! ⏳
Good news! Our product does support concurrent RAPTOR processing 🚀. Please double-check your hardware configuration to ensure everything is set up correctly 🖥️🔧.
If this resolves your issue, feel free to close the feature request. Otherwise, we’ll help clean things up in our next sweep 🧹. As always, we’d love to hear more constructive suggestions — thanks again for contributing! 🙌