Chen Qian
@JBExcoffier Thanks for reporting the issue! The formatted prompt is actually a multi-turn message; to get it, you can use `dspy.inspect_history()`, or fetch `dspy.settings.lm.history[-1]["messages"]` inside your program. If...
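A minimal sketch of both approaches; the model name and the toy `question -> answer` signature are assumptions for illustration, not from the thread:

```python
import dspy

# Configure any LM you have access to; the model name here is an assumption.
lm = dspy.LM("openai/gpt-4o-mini")
dspy.configure(lm=lm)

predict = dspy.Predict("question -> answer")
predict(question="What is DSPy?")

# Pretty-print the last formatted prompt/response exchange.
dspy.inspect_history(n=1)

# Or fetch the raw multi-turn messages programmatically.
messages = dspy.settings.lm.history[-1]["messages"]
for m in messages:
    print(m["role"], ":", str(m["content"])[:80])
```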
@mwilliamsempower Thanks for reporting the issue!

```
File ".../litellm/utils.py", line 1356, in wrapper
  raise e
File ".../litellm/utils.py", line 1287, in wrapper
  executor.submit(
File "/usr/lib/python3.12/concurrent/futures/thread.py", line 170, in submit
  raise RuntimeError('cannot...
```
@corpawsmanagementorg Thanks for the feature request! I took a look at Toon, which seems interesting but still quite new. I'll keep this issue open to revisit in the future.
@gregm711 Thanks for reporting the issue! In most cases the bottleneck is LM I/O rather than local computation, so I am not sure how much distributing the workload would help...
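Since the waits are network-bound, thread-level concurrency already overlaps them. A minimal sketch using `dspy.Evaluate` with `num_threads`; the model name, the tiny devset, and the placeholder metric are assumptions for illustration:

```python
import dspy

# Model name is an assumption; use any LM you have access to.
dspy.configure(lm=dspy.LM("openai/gpt-4o-mini"))

qa = dspy.Predict("question -> answer")

# Hypothetical tiny devset, just to make the sketch runnable.
devset = [
    dspy.Example(question=q).with_inputs("question")
    for q in ["What is DSPy?", "What is an LM?"]
]

def metric(example, pred, trace=None):
    # Placeholder metric for the sketch.
    return len(pred.answer) > 0

# Threads overlap the LM's network I/O, so this saturates throughput
# without needing distributed workers.
evaluate = dspy.Evaluate(devset=devset, metric=metric, num_threads=8)
evaluate(qa)
```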
@okhat Is this temporary, or will we have a different public URL?
@jyang2-q-retail Thanks for the feature request! Latency is a very flaky metric in AI applications, though. Many AI applications are bottlenecked by LM calls, which have nondeterministic behavior and could...
@rahulsharmavishwakarma Thanks for reporting the issue! It seems like your LM response is empty. Could you try tracing your program by following https://dspy.ai/tutorials/observability/?
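A minimal tracing sketch in the spirit of that tutorial; it assumes a recent MLflow with the DSPy autologging flavor, and the model name is an assumption:

```python
import dspy
import mlflow

# Assumes MLflow with DSPy autologging support; records every LM call as a trace.
mlflow.dspy.autolog()

dspy.configure(lm=dspy.LM("openai/gpt-4o-mini"))  # model name is an assumption

predict = dspy.Predict("question -> answer")
result = predict(question="Why is my response empty?")
# The MLflow trace UI now shows the raw LM request and response,
# which confirms whether the LM output itself came back empty.
```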
@Ucag Thanks for reporting the issue! Yes, locale support is something we are looking into. We cannot promise an exact ETA now, but it should come no later than summer.
I am worried that the change to the prompt could lead to a performance regression, since the LM might do a worse job selecting multiple tools in bulk. We...
@gnetsanet Thanks for the feature request, and sorry about the late reply! Yes, that's a great idea, and I do like it. Whenever you are ready, please share a Colab...