Vaibhav Balloli
@ekzhu Yes, happy to add this to the docs. The potential concern is that any functionality with token_limits would still result in an error - that is why I was hoping...
I'm currently working on the community packages for external LLMs through the OpenAI client @nullnuller - I'll keep the conversation updated
I'll be maintaining it here: https://github.com/vballoli/autogen-openaiext-client. Apologies for not finishing it sooner; I'll get to it after Thanksgiving with the proposed timeline in the README. @nullnuller
I've made updates and tested with magentic-one. There seem to be issues with tool calling in Gemini, but apart from that everything seems to be working fine. I will follow...
Happy to help! Looking forward to contributing more to autogen
Yep this looks good - I'll follow up on creating this over the weekend
Ah, I see; my bad. I simply shifted from Predict -> CoT without considering the tokens. It's fixed now, thanks @okhat ! Alternatively, can this error be handled better? My...
@okhat should I add the warning here instead: 20 tokens is reasonable in a `dspy.Predict` setting maybe but not in CoT, so we can simply add the warning here: https://github.com/stanfordnlp/dspy/blob/b88caa3228512df3d56ba5a9320cd4476389c7ae/dspy/predict/chain_of_thought.py#L37C60-L37C66...
Yeah that makes more sense, let me know which option fits better and I'll send the PR: 1. https://github.com/stanfordnlp/dspy/blob/1bab822b366785698883802d48f4078590dee8a6/dspy/predict/predict.py#L24 Check if the number of signatures is more than one, emit a...
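A warning along these lines could be sketched as follows. This is a minimal, standalone illustration, not the actual DSPy code: the function name, the threshold value, and where it would be hooked in (e.g. `ChainOfThought.__init__`) are all assumptions.

```python
import warnings

# Hypothetical threshold below which a chain-of-thought completion is
# likely to be truncated; the real cutoff would need tuning.
MIN_COT_TOKENS = 75

def check_max_tokens(max_tokens: int, module_name: str = "ChainOfThought") -> None:
    """Warn when max_tokens is too small for a reasoning-style module.

    A tiny budget (e.g. 20 tokens) may be fine for a bare Predict call,
    but a ChainOfThought module also has to emit the rationale field,
    so the output can be cut off before the final answer is produced.
    """
    if max_tokens < MIN_COT_TOKENS:
        warnings.warn(
            f"max_tokens={max_tokens} is likely too small for {module_name}; "
            f"the rationale may be truncated before the answer is produced. "
            f"Consider raising it to at least {MIN_COT_TOKENS}.",
            UserWarning,
        )
```

The advantage of a warning over a hard error is that a short budget is still legitimate for plain `dspy.Predict`, so the check only needs to fire for modules that prepend a reasoning field.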
This is the trace of the error under 20 tokens:

```
/usr/local/lib/python3.10/dist-packages/dspy/predict/predict.py in v2_5_generate(lm, lm_kwargs, signature, demos, inputs, _parse_values)
    262     adapter = dspy.settings.adapter or dspy.ChatAdapter()
    263
--> 264     return adapter(...
```