Extract memory LLM function calls into separate methods
Description
Different LLMs return different function call results for the same prompt, so ideally each LLM's function calls should use prompts tailored to that LLM. Different functionalities (extracting entities, facts, etc.) should also be able to use different LLMs, because some LLMs are good at handling graphs while others are good at extracting facts. To support this, we need an LLM implementation abstraction that provides separate atomic capabilities (extract entities, nodes, facts, etc.), each with its own prompts.
This MR is a preparatory MR for the code refactor described above. It doesn't change any logic; it just extracts some atomic functionalities into separate _llm_atomic_fncalls. The end goal is that memory and graph memory call llm.atomic_fncall.
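A minimal sketch of the target abstraction (all names here are hypothetical illustrations, not the actual code in this MR): each atomic capability owns its prompt, so a backend or task can override the prompt and the underlying call independently, and memory/graph memory call the atomic methods instead of building prompts themselves.

```python
from abc import ABC, abstractmethod


class LLMBase(ABC):
    """Hypothetical abstraction: each atomic fncall owns its prompt,
    so different backends (or tasks) can swap prompts independently."""

    # Per-capability prompts; a subclass tuned for a specific LLM
    # could override these without touching the call logic.
    ENTITY_PROMPT = "Extract the entities from the following text:\n{text}"
    FACT_PROMPT = "Extract the facts from the following text:\n{text}"

    @abstractmethod
    def _generate(self, prompt: str) -> str:
        """Backend-specific completion call."""

    # Atomic fncalls: memory / graph memory would call these
    # rather than assembling prompts themselves.
    def extract_entities(self, text: str) -> str:
        return self._generate(self.ENTITY_PROMPT.format(text=text))

    def extract_facts(self, text: str) -> str:
        return self._generate(self.FACT_PROMPT.format(text=text))


class EchoLLM(LLMBase):
    """Stub backend for illustration; a real one would call an LLM API."""

    def _generate(self, prompt: str) -> str:
        return f"response to: {prompt.splitlines()[0]}"


llm = EchoLLM()
result = llm.extract_entities("Alice met Bob in Paris.")
```

With this shape, pairing a graph-savvy LLM with entity extraction and a different LLM with fact extraction becomes a matter of instantiating different subclasses per capability.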
- [x] Refactor (does not change functionality, e.g. code style improvements, linting)
How Has This Been Tested?
- [x] Unit Test
Checklist:
- [ ] My code follows the style guidelines of this project
- [ ] I have performed a self-review of my own code
- [ ] I have commented my code, particularly in hard-to-understand areas
- [ ] I have made corresponding changes to the documentation
- [ ] My changes generate no new warnings
- [ ] I have added tests that prove my fix is effective or that my feature works
- [ ] New and existing unit tests pass locally with my changes
- [ ] Any dependent changes have been merged and published in downstream modules
- [ ] I have checked my code and corrected any misspellings
Maintainer Checklist
- [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [ ] Made sure Checks passed
@GingerMoon Can you please provide the script we can use to test this functionality? cc @spike-spiegel-21
@GingerMoon Can you please resolve the merge conflicts?
Closing as stale. Feel free to reopen after making the requested changes.