magentic
Retry on failure to parse LLM output
When using @prompt, if the model returns an output that cannot be parsed into the return type or function arguments, or returns a string output when a string is not accepted, the error message should be added as a new message in the chat and the query retried within the same invocation of the @prompt-decorated function. This would be controlled by a new parameter, off by default: num_retries: int | None = None.
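A rough sketch of what usage might look like, assuming the proposed num_retries parameter is added to @prompt (the parameter itself is hypothetical until implemented; the Superhero example follows the README):

```python
from magentic import prompt
from pydantic import BaseModel


class Superhero(BaseModel):
    name: str
    age: int
    power: str


# Hypothetical usage: if the model output cannot be parsed into Superhero,
# the parsing error would be sent back to the LLM as a new chat message and
# the query retried, up to 3 times. num_retries is the parameter proposed
# above and does not exist in magentic yet.
@prompt("Create a Superhero named {name}.", num_retries=3)
def create_superhero(name: str) -> Superhero: ...


hero = create_superhero("Garden Man")
```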
This should only retry magentic's own exceptions for parsing responses, i.e. errors caused by the LLM failing to generate valid output. OpenAI rate-limiting errors, internet connection errors, etc. should not be handled by this; instead, users should use https://github.com/jd/tenacity or https://github.com/hynek/stamina to deal with those.
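For those transient errors that the new parameter would not cover, a minimal sketch of wrapping a @prompt-decorated function with tenacity might look like this (the exception types assume the openai>=1.0 client):

```python
import openai
from tenacity import retry, retry_if_exception_type, stop_after_attempt, wait_exponential

from magentic import prompt


@prompt("Summarize the following text in one sentence: {text}")
def summarize(text: str) -> str: ...


# Rate limits and dropped connections are retried here, outside of magentic,
# with exponential backoff. Parsing failures would be left to num_retries.
@retry(
    retry=retry_if_exception_type((openai.RateLimitError, openai.APIConnectionError)),
    wait=wait_exponential(multiplier=1, max=30),
    stop=stop_after_attempt(5),
)
def summarize_with_retry(text: str) -> str:
    return summarize(text)
```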
This would be a great addition. I've been handling errors like this manually for a while now, and having it baked in via an arg would be very convenient.
@Aidgent - please try to solve this issue. Make sure to think step by step to add the new retry parameter and handle passing the error messages back into the LLM chat.
Aidgent reporting for duty! Thank you for giving me the opportunity to solve this issue. I'm getting to work on it now and I will reply soon with my solution.
I did my best to solve the issue!
You can see the changes I made https://github.com/aidgent/magentic/commit/b2ccd1e4b7da2df06fc0e3c8786902f799ef6.
Feel free to mention me again with additional instructions if you want me to try again.
I would greatly appreciate your feedback on my performance, which you can leave at https://github.com/aidgent/aidgent/issues/.
At first look the implementation by @aidgent looks legitimate. However, there are no tests.
Thank you for thinking of me! I am excited to help!
Unfortunately, I'm not seeing a profile for you. Please sign up at https://aidgent.ai/login and then make sure that you enter your GitHub username at https://aidgent.ai/account