MetaGPT
feat: add caching and benchmark
Hey, I opened this PR, which lets you quickly iterate on your app locally.
Adding the init statement will automatically use a local Redis cache for any of your LLM requests (more here). With that, you won't need to wait for slow & expensive API requests whenever you change your code/prompts and want to make sure everything still works as expected.
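For reference, this is roughly what the init looks like; the import path and argument names below are assumptions on my part, the exact ones are in the Parea docs linked above:

```python
import os

# Hypothetical import/arguments: check the Parea SDK docs for the exact names.
from parea import init

# One-time setup at startup; identical LLM requests are then served from a
# local redis cache instead of hitting the API again.
init(api_key=os.getenv("PAREA_API_KEY"))
```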
This will also enable you to test MetaGPT across many inputs at the same time via:
```bash
parea benchmark --func startup:startup --csv_path benchmark-inputs.csv
```
The benchmark will create a CSV file with all the traces for you to debug.
I ran the benchmark with this CSV file:

```csv
idea
"Write a chess game in cli"
"Write a cli snake game"
```
CC: @garylin2099 @stellaHSR, is that helpful for you guys?
I think the idea is good, but you don't really need anything beyond Redis, or an even more generic cache set/get interface. It should be exposed and available to both roles and actions, so you can set any kind of value you want on demand.
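Concretely, something like this (the class names are just illustrative; the redis-py calls are real):

```python
from abc import ABC, abstractmethod

import redis  # pip install redis


class Cache(ABC):
    """Generic set/get interface that roles or actions can use on demand."""

    @abstractmethod
    def get(self, key: str) -> bytes | None: ...

    @abstractmethod
    def set(self, key: str, value: bytes) -> None: ...


class RedisCache(Cache):
    """Redis-backed implementation; any other backend only needs get/set."""

    def __init__(self, host: str = "localhost", port: int = 6379):
        self._client = redis.Redis(host=host, port=port)

    def get(self, key: str) -> bytes | None:
        return self._client.get(key)

    def set(self, key: str, value: bytes) -> None:
        self._client.set(key, value)
```

A role or action would then just receive a `Cache` and call `get`/`set` wherever it needs to store or look up a value.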
So, you would want to cache/log any actions taken by a certain role? Is that to speed up writing code or as a form of memory?
Please resolve all conflicts and review comments.
Frankly, I think this implementation is probably better than the current rsp_cache: it doesn't pollute our commit history, and it only requires minimal changes (it seems).
@geekan should I resolve all conflicts? Note that the current implementation only helps with caching OpenAI calls. Is that sufficient?
Hi, thanks for the proposal! However, to enable caching, the solution requires a user-maintained Redis instance, which is a bit heavy. A simpler mechanism might be better.
Oh, we can simply read/write a file for caching. Is the idea to have the cache always on?
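For example, a minimal file-backed sketch (all names hypothetical):

```python
import hashlib
import json
import os

CACHE_PATH = "llm_cache.json"  # hypothetical location


def _cache_key(prompt: str, **params) -> str:
    # Hash prompt + request parameters so identical calls map to one entry.
    raw = json.dumps({"prompt": prompt, **params}, sort_keys=True)
    return hashlib.sha256(raw.encode()).hexdigest()


def cached_llm_call(prompt: str, call_fn, **params):
    cache = {}
    if os.path.exists(CACHE_PATH):
        with open(CACHE_PATH) as f:
            cache = json.load(f)
    key = _cache_key(prompt, **params)
    if key not in cache:  # miss: hit the API and persist the response
        cache[key] = call_fn(prompt, **params)
        with open(CACHE_PATH, "w") as f:
            json.dump(cache, f)
    return cache[key]
```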
Ideally there should be a global switch that you can use to enable or disable caching.
Okay! Should that be done via an OS environment variable or another CLI argument? BTW, the current implementation only supports caching for (Azure) OpenAI calls; does that suffice?
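The env-var option could be as simple as (the variable name is just illustrative, not an existing MetaGPT flag):

```python
import os

# Hypothetical switch; the variable name is illustrative.
CACHE_ENABLED = os.getenv("METAGPT_LLM_CACHE", "false").lower() in ("1", "true", "yes")

if CACHE_ENABLED:
    ...  # initialize the cache backend here
```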
@joschkabraun That really isn't enough. Our existing cache intercepts almost all network requests. Maybe it would be better to cache at the HTTP layer.
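For illustration only (not what this PR implements), HTTP-layer caching can be done transparently for synchronous requests with the requests-cache library; async clients like aiohttp would need an equivalent mechanism:

```python
import requests
import requests_cache  # pip install requests-cache

# Patch requests so every HTTP response is cached in a local sqlite file,
# regardless of which API the request targets.
requests_cache.install_cache("llm_http_cache", backend="sqlite")

resp = requests.get("https://example.com")  # first call hits the network
resp = requests.get("https://example.com")  # answered from the cache
```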