arnavsinghvi11
Thanks for checking @mkotlarz. Could you run `ruff check . --fix-only` and push again? Should be good to merge after that!
Thanks @jasujaayush!
Thanks @erst-neaste, please do contribute PRs! Could you also share snippets of related code and the full error stack trace? This seems to be an error in configuring your LM...
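For reference, wiring up the LM usually looks like this (a minimal sketch using the OpenAI wrapper; assumes `OPENAI_API_KEY` is set in the environment):

```python
import dspy

# Minimal sketch: configure an LM globally before running any DSPy module.
# The OpenAI wrapper is just one example; other supported wrappers are
# configured the same way.
lm = dspy.OpenAI(model="gpt-3.5-turbo", max_tokens=250)
dspy.settings.configure(lm=lm)

# Quick smoke test that the LM is wired up correctly.
predict = dspy.Predict("question -> answer")
print(predict(question="What is 2 + 2?").answer)
```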
Hi @XiaoConstantine, thanks for the PR! Can you run `ruff check . --fix-only` to fix linting? Ready to merge after that.
Thanks @XiaoConstantine!
Hi @DSLituiev, which LM is this for? Backend support for expected generations/parsing with chat models is a WIP. You can overcome some of this through proper stopping conditions or external...
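As one sketch of the stopping-condition workaround (the `stop` kwarg is forwarded to the underlying completion call; exact kwarg plumbing may vary by LM wrapper):

```python
import dspy

# Sketch: pass explicit stop sequences so a chatty model halts before it
# drifts past the fields DSPy expects to parse. The stop string here is
# illustrative; pick one that matches your prompt's delimiters.
lm = dspy.OpenAI(model="gpt-3.5-turbo", stop=["\n\n---"])
dspy.settings.configure(lm=lm)
```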
Hi @JPonsa , this is partly because [chat models are a bit iffy at the moment in DSPy](https://github.com/stanfordnlp/dspy/issues/662) and some models tend to hallucinate on DSPy's formatting. This can be...
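One way to see the formatting drift firsthand is to dump the last prompt/completion pair (a debugging sketch; `inspect_history` prints recent LM calls):

```python
import dspy

lm = dspy.OpenAI(model="gpt-3.5-turbo")
dspy.settings.configure(lm=lm)

qa = dspy.ChainOfThought("question -> answer")
qa(question="Who wrote The Hobbit?")

# Print the most recent prompt and raw completion to check whether the
# model reproduced DSPy's field markers (e.g. "Reasoning:", "Answer:")
# or hallucinated its own formatting.
lm.inspect_history(n=1)
```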
Hi @usamajamil43, thanks for the contribution! Left a few comments in the PR. Noticed some TODOs, so feel free to mark the PR as ready to merge once the changes...
Thanks @kylerush! Currently, there is a check for whether the LM is configured in [predict](https://github.com/stanfordnlp/dspy/blob/696f2d2e1f96f173abb36302c71a725d564dfadb/dsp/primitives/predict.py#L59), which runs this validation before any model generations. But it's actually not getting hit since...
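For context, that check is essentially a fail-fast guard at the top of the generate path. A paraphrased sketch (names and the assertion message are illustrative; see the linked source for the exact code):

```python
import dsp

def generate(template, **kwargs):
    # Paraphrased guard: fetch the globally configured LM and fail fast,
    # before any generation is attempted, if none has been set.
    generator = dsp.settings.lm
    assert generator is not None, "No LM is loaded."
    # ... real generation logic follows in dsp/primitives/predict.py ...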
Thanks @kylerush! This would be great to add through a PR. The current behavior was designed to limit excessive error-message output, partly handled through [how many errors can...
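(For illustration only: a generic "cap the surfaced errors" pattern, loosely mirroring the behavior described above. `MAX_ERRORS` and `report_error` are hypothetical names, not DSPy's actual API; see the linked source for the real mechanism.)

```python
# Hypothetical sketch of an error-output cap, not DSPy's actual code.
MAX_ERRORS = 10
_error_count = 0

def report_error(err: Exception) -> None:
    global _error_count
    _error_count += 1
    if _error_count <= MAX_ERRORS:
        print(f"Error: {err!r}")
    elif _error_count == MAX_ERRORS + 1:
        print("Too many errors; suppressing further error output.")
```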