Dhar Rawal
This is possible right now with one of the fixes in PR https://github.com/stanfordnlp/dspy/pull/843. Below is pseudocode for doing this (a fuller sketch follows):

    def my_optimize_fn():
        # load previously optimized program
        # ...
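A minimal, runnable sketch of that pseudocode, assuming a hypothetical program class, metric, and save path (none of these names come from the PR) and using the standard `program.load(...)` and `BootstrapFewShot` APIs:

```python
import dspy
from dspy.teleprompt import BootstrapFewShot

class MyProgram(dspy.Module):
    """Hypothetical program, used only to illustrate the load-then-reoptimize flow."""
    def __init__(self):
        super().__init__()
        self.generate = dspy.Predict("question -> answer")

    def forward(self, question):
        return self.generate(question=question)

def my_metric(example, pred, trace=None):
    # placeholder metric - replace with a real comparison
    return example.answer == pred.answer

def my_optimize_fn(trainset):
    # load previously optimized program
    program = MyProgram()
    program.load("optimized_program.json")  # path is an assumption

    # continue optimizing from the loaded state
    teleprompter = BootstrapFewShot(metric=my_metric)
    return teleprompter.compile(program, trainset=trainset)
```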
@arnavsinghvi11 @aazizisoufiane Both the files in this PR are now obsolete. Based on a review of the backoff handler, the changes belong in the aws_providers.py file. Specifically, the call_model functions...
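For illustration, a hedged sketch of what attaching a backoff handler to a call_model-style function in aws_providers.py could look like; the exception type, retry settings, and function signature are assumptions, not the PR's code:

```python
import backoff

class ThrottlingError(Exception):
    """Hypothetical stand-in for the provider's throttling/rate-limit error."""

@backoff.on_exception(backoff.expo, ThrottlingError, max_time=60)
def call_model(model_id: str, body: str) -> str:
    # issue the request to the AWS-hosted model here; the decorator retries
    # with exponential backoff whenever ThrottlingError is raised
    ...
```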
> It seems like an edit to the `super().__init__` call in the Bedrock class still needs to be changed - batch_n should be False `batch_n=False`

@mikeusru - I'm not so...
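For context, a hedged sketch of the edit under discussion; only `batch_n=False` comes from the thread, while the parent class and other parameters are hypothetical:

```python
# Hypothetical illustration only - not the actual dspy classes.
class BaseProvider:
    def __init__(self, region_name: str, batch_n: bool = True):
        self.region_name = region_name
        self.batch_n = batch_n

class Bedrock(BaseProvider):
    def __init__(self, region_name: str):
        # the change being discussed: forward batch_n=False to the parent
        super().__init__(region_name=region_name, batch_n=False)
```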
I am running into the same error after upgrading to 2.3.4. `("augmented" not in demo or not demo.augmented)` probably needs to be changed to `(not demo.get('augmented', False))`
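A small illustration of why the proposed check is safer, assuming a demo can arrive as a plain dict (for example after loading saved state); the data here is hypothetical:

```python
# a demo that carries an "augmented" key but is a plain dict, not an object
demo = {"question": "q", "answer": "a", "augmented": True}

try:
    plain = "augmented" not in demo or not demo.augmented  # dicts have no .augmented attribute
except AttributeError as e:
    print("original check raises:", e)

plain = not demo.get("augmented", False)  # proposed check handles dicts and missing keys
print(plain)  # False, because this demo is marked augmented
```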
I have a PR #772 that tackles this issue. Almost done...
A RAG example using Pinecone for retrieval would be helpful. Or... are there reasons not to use Pinecone here?
@okhat, ty! I tested the code below and it works. I will submit a pull request if this looks reasonable.

```
"""
Retriever model for Pinecone
"""
import pinecone  # type:...
```
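Since the snippet above is truncated, here is a minimal sketch of what a Pinecone retriever model for DSPy could look like; it is not the code from the PR, and the index name, embedding callable, and "text" metadata field are assumptions (it also targets the older pinecone-client `init`/`Index` API):

```python
import pinecone
import dspy


class PineconeRM(dspy.Retrieve):
    """Hedged sketch of a Pinecone-backed retriever model for DSPy."""

    def __init__(self, index_name, api_key, environment, embed_fn, k=3):
        super().__init__(k=k)
        pinecone.init(api_key=api_key, environment=environment)
        self.index = pinecone.Index(index_name)
        self.embed_fn = embed_fn  # maps a query string to an embedding vector

    def forward(self, query, k=None):
        k = k if k is not None else self.k
        results = self.index.query(
            vector=self.embed_fn(query), top_k=k, include_metadata=True
        )
        # assumes each vector was upserted with a "text" metadata field
        passages = [m["metadata"]["text"] for m in results["matches"]]
        return dspy.Prediction(passages=passages)
```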
@okhat I have submitted the [pull request](https://github.com/stanfordnlp/dspy/pull/107), fyi
I switched to using gpt-3.5-turbo-16k to get around this problem, but it's a paid/closed model. Perhaps someone here can suggest an equivalent open-source/free model.
@sreenivasmrpivot you can increase max_tokens as follows: `llm = dspy.OpenAI(model='gpt-3.5-turbo-16k', max_tokens=8000)`

Off the top of my head, could you generate one API-Inst pair at a time and pass the "instruction"s of the...
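A minimal sketch of the first suggestion, assuming the LM is registered globally via `dspy.settings.configure` (the token limit is just the value from the comment above):

```python
import dspy

# configure DSPy to use the larger-context model so longer generations fit
llm = dspy.OpenAI(model='gpt-3.5-turbo-16k', max_tokens=8000)
dspy.settings.configure(lm=llm)
```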