[Feature] Retries on Pydantic Validation Errors
What feature would you like to see?
Maybe I'm wrong, but I thought I recalled the old version of DSPy (pre-LiteLLM / pre-adapter era) retrying on Pydantic validation failures. However, I'm not seeing any retrying in the code paths my stack trace points at (predict.py, json_adapter.py). Is there a built-in way to control retries in the new version of DSPy, or do I need to implement it in my own modules now?
E.g.:
```
pydantic_core._pydantic_core.ValidationError: 1 validation error for SomeSignature
some_list
  List should have at most 3 items after validation, not 4 [type=too_long, input_value=[1,2,3,4], input_type=list]
    For further information visit https://errors.pydantic.dev/2.9/v/too_long
```
I would expect this to feed back into the LLM to let it correct its outputs.
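A minimal sketch of the behaviour being requested (a hypothetical wrapper, not a current DSPy API): catch the validation failure, feed the error message back into the next call, and retry so the LM can correct its output. Since `pydantic.ValidationError` subclasses `ValueError`, catching `ValueError` keeps the sketch dependency-free; the `past_error` keyword is an assumption about how the module would consume the feedback.

```python
def call_with_validation_retries(module, max_retries=3, **kwargs):
    """Hypothetical helper (not DSPy's implementation): retry `module`
    on validation failures, passing the error text back in via a
    `past_error` kwarg so the LM can correct its output.
    pydantic.ValidationError subclasses ValueError, so this catch
    also covers Pydantic failures."""
    last_err = None
    for _ in range(max_retries):
        try:
            return module(**kwargs)
        except ValueError as err:
            last_err = err
            kwargs["past_error"] = str(err)  # feed the failure back
    raise last_err
```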
Would you like to contribute?
- [ ] Yes, I'd like to help implement this.
- [ ] No, I just want to request it.
Additional Context
No response
I agree. Here's what I see: when they switched to LiteLLM, this was added: https://github.com/stanfordnlp/dspy/blob/84b9b88a444ee53995dd6b8ffdf6321ab5e7fcc7/dspy/clients/lm.py#L65, which seems to do something different from what I remember dspy.TypedPredictor doing.
Once upon a time, at least at commit hash b32b2abbad12836b3fd1823069b5b81aa9244aeb, when it got a TypeError it would feed the error back to the LM and retry. I agree that this was useful behaviour.
While looking into it, though, I found dspy/predict/retry.py, which is entirely commented out. I was all set to work on adding the retrying back in until I saw that. Can someone explain what the plan is for retrying on Pydantic validation failures?
Below is a message that may be useful.

While we prepare a short tutorial on this: folks looking to migrate from 2.5-style Assertions can now use `dspy.BestOfN` or `dspy.Refine`, which replace the assertions functionality with streamlined modules instead.

```python
module = dspy.ChainOfThought(...)  # or a complex multi-step dspy.Module
module = dspy.BestOfN(module, N=5, reward_fn=reward_fn, threshold=1.0)
module(...)  # at most 5 retries, picking the best reward, but stopping if `threshold` is reached
```

Reward functions can return scalar values like float or bool, e.g.
```python
def reward_fn(input_kwargs, prediction):
    return len(prediction.field1) == len(prediction.field2)
```
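To make the best-of-N semantics concrete, here is a dependency-free sketch of the idea described above (assumed semantics, not DSPy's actual implementation): call the module up to N times, keep the highest-reward prediction, and stop early once the reward meets the threshold.

```python
def best_of_n(module, reward_fn, n=5, threshold=1.0, **kwargs):
    """Sketch of the BestOfN idea (assumed semantics, not DSPy's code):
    call `module` up to `n` times, keep the prediction with the highest
    reward, and stop early once the reward reaches `threshold`."""
    best, best_reward = None, float("-inf")
    for _ in range(n):
        prediction = module(**kwargs)
        reward = reward_fn(kwargs, prediction)
        if reward > best_reward:
            best, best_reward = prediction, reward
        if reward >= threshold:
            break  # good enough; skip the remaining attempts
    return best
```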
> Maybe I'm wrong but I thought I recalled the old version of DSPY (pre-litellm / pre-adapter era) doing retries for pydantic validation failures, but I'm not seeing any retrying in the codepaths I'm looking at in the stack trace (predict.py, json_adapter.py)
Exactly. Typed DSPy used to be built on retries, but it was changed to the LiteLLM/adapter stack. Many of these frameworks use constrained-generation techniques, which don't support general Pydantic validators. Personally, I think this is a shame, since types are a great place to define invariants on your data, rather than spreading them around your code.
Thank you for the PR!
Great work on the PR and really loving this framework.
Quick question: I didn't see support for async calls in dspy.Refine and dspy.BestOfN. Are there plans to implement this?
Are there plans to implement support for async calls in dspy.Refine and dspy.BestOfN? For example, how can you use Refine with a module that uses MCP tools?
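Until async support lands in those wrappers, one stopgap (a hypothetical helper, not a DSPy API) is to offload the synchronous module call to a worker thread so it doesn't block the event loop:

```python
import asyncio

async def acall(module, **kwargs):
    """Hypothetical stopgap for a synchronous module (not a DSPy API):
    run the blocking call in a worker thread via asyncio.to_thread so
    the surrounding event loop stays responsive."""
    return await asyncio.to_thread(module, **kwargs)
```

This keeps an async application responsive, though it doesn't give true concurrency inside the wrapped module itself.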