arnavsinghvi11

Results: 149 comments of arnavsinghvi11

Hi @anushka192001 , please refrain from pasting code that contains harmful content going forward. I have corrected this on several of the issues you raised. The issue is still unclear as...

Hi @oekekezie , the current PyPI version of dspy is not updated with the latest PR changes. Please install from source as needed while we work on updating dspy-ai!

Hi @alcinos , thanks for raising this! Feel free to push a PR fixing the typing inconsistencies and adding documentation. The `OptimizerResult` is akin to the return values in [`dspy.Evaluate`](https://github.com/stanfordnlp/dspy/blob/d09d984ecaf17f7262294d50fe46fd8105fbf291/dspy/evaluate/evaluate.py#L215),...
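To make the typing discussion concrete, here is a hypothetical sketch of what a typed result container could look like. The class name echoes `OptimizerResult` from the comment above, but every field and property here is illustrative, not dspy's actual definition.

```python
# Hypothetical sketch only: a typed result container in the spirit of
# OptimizerResult. Field names and the best_score helper are assumptions
# for illustration, not dspy's real API.
from dataclasses import dataclass, field
from typing import Any, List


@dataclass
class OptimizerResult:
    program: Any                                   # the optimized program
    scores: List[float] = field(default_factory=list)  # per-candidate scores

    @property
    def best_score(self) -> float:
        # Highest score seen across candidates; 0.0 if none were evaluated.
        return max(self.scores) if self.scores else 0.0


result = OptimizerResult(program=None, scores=[0.2, 0.8, 0.5])
print(result.best_score)  # 0.8
```

A dataclass like this makes the return type explicit, which is the kind of consistency a documentation PR could pin down.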

Hi @jsleight , thanks for raising this. Currently, the expected usage is to declare your Dataset type first and then set the inputs; see this example from [intro.ipynb](https://colab.research.google.com/github/stanfordnlp/dsp/blob/main/intro.ipynb#scrollTo=Kp9NHVagvIuD): ``` from dspy.datasets import...
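The declare-then-mark-inputs pattern can be sketched without installing dspy. The `Example` class below is a minimal stand-in that mimics how `dspy.Example.with_inputs` separates input fields from label fields; it is an illustration of the pattern, not dspy's implementation.

```python
# Minimal stand-in for the dspy.Example pattern: construct the example
# first, then mark which fields are inputs via with_inputs().
class Example:
    def __init__(self, **fields):
        self._fields = dict(fields)
        self._input_keys = set()

    def with_inputs(self, *keys):
        # Return a copy with the given field names marked as inputs.
        copy = Example(**self._fields)
        copy._input_keys = set(keys)
        return copy

    def inputs(self):
        return {k: v for k, v in self._fields.items() if k in self._input_keys}

    def labels(self):
        return {k: v for k, v in self._fields.items() if k not in self._input_keys}


# Declare the data first, then set the inputs, as in intro.ipynb.
ex = Example(question="What is DSPy?", answer="A framework for LM programs.")
ex = ex.with_inputs("question")
print(ex.inputs())  # {'question': 'What is DSPy?'}
print(ex.labels())  # {'answer': 'A framework for LM programs.'}
```

In real dspy code the same idea usually appears as a list comprehension over a loaded dataset, e.g. marking `'question'` as the input field on each training example.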

Hi @rysloan4 , there was a similar change in a [past PR](https://github.com/stanfordnlp/dspy/pull/486/files) to correct this, but it is a bit hacky and would require this change for every supported RM....

Hi @Su3h7aM , #744 is related but not mergeable yet. > the responses are only generated by the last declared model This is a bit unclear. Are the generations only from phi3,...

Hi @AmoghM , I believe this is related to a previously-surfaced [issue](https://github.com/stanfordnlp/dspy/issues/749) with [ollama only printing the first completion, regardless of the specified `n`](https://github.com/stanfordnlp/dspy/issues/749#issuecomment-2032789021). It seems like ollama would need...
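If a backend ignores the `n` parameter and returns only one completion per call, one workaround on the client side is to issue `n` independent single-completion requests and collect the results. The sketch below is an assumption-laden illustration of that idea (the `call_lm` callable and `fake_lm` stub are hypothetical), not ollama's or dspy's actual behavior.

```python
import random


def n_completions(call_lm, prompt, n):
    """Approximate n sampled completions from a backend that only returns
    one completion per request, by calling it n times independently."""
    return [call_lm(prompt) for _ in range(n)]


# Hypothetical stand-in for an LM client that yields one completion per call.
def fake_lm(prompt):
    return f"answer-{random.randint(0, 9)}"


print(n_completions(fake_lm, "Q: ...", 3))
```

This costs `n` round trips instead of one batched request, so it is only a stopgap until the backend honors `n` natively.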

Thanks @mikeedjones for the PR! This makes sense, but out of curiosity from #734 : does setting a large number of tokens likely solve this issue? I feel like it's actually...

I'm not sure I understand. If the generation limit is restricted, does setting `max_tokens = 4096` not capture what's done here? If the long signatures exceed the very high token-generation...
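The reasoning about generation limits can be shown with a toy model: a completion clipped to a small token budget loses its tail (which can break downstream parsing), while a generous budget like 4096 leaves a typical long output intact. Everything below is an illustration of that arithmetic, not dspy or any LM backend.

```python
# Toy illustration: a "completion" clipped to a max_tokens budget.
def generate(completion_tokens, max_tokens):
    """Pretend LM call: return the completion truncated to max_tokens."""
    return completion_tokens[:max_tokens]


long_completion = ["tok"] * 600   # a long structured output, 600 tokens

clipped = generate(long_completion, max_tokens=150)
full = generate(long_completion, max_tokens=4096)

print(len(clipped))  # 150 -> truncated; parsing the tail can fail
print(len(full))     # 600 -> intact; 4096 comfortably covers the output
```

The open question from the thread remains for outputs that exceed even a very high cap, where no fixed `max_tokens` fully avoids truncation.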

Thanks @mikeedjones , this is really helpful! I now see the issue lies more in the response parsing, which triggers the fallback completion logic. With your code above and...