Chen Qian

189 comments by Chen Qian

Thanks for the reply! The code is not representative because we can always have signatures that break the DSPy adapter, which breaks the response-parsing process. The discrepancy between streaming and...

@Akshay1-6180 Thanks for reporting the issue! I think you are looking for tracing support: https://dspy.ai/tutorials/observability/#tracing. Let us know if this helps with your use case!
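As a pointer, wiring DSPy up to MLflow tracing is roughly a two-line setup. This is a sketch, assuming a recent MLflow version with DSPy autologging support; the experiment name is illustrative:

```python
import mlflow

# Enable automatic tracing of DSPy module calls,
# including the underlying LM requests and responses.
mlflow.dspy.autolog()

# Illustrative experiment name; traces land here in the MLflow UI.
mlflow.set_experiment("dspy-tracing-demo")
```

After this, running any DSPy program records traces you can browse in the MLflow UI.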

> before sending it to the llm even for tracing

This is also supported by MLflow tracing, which covers not only the LLM request/response.

You can also subclass from `dspy.BaseLM` to make your own LM that under the hood uses `google-genai`: https://github.com/stanfordnlp/dspy/blob/6ee8cdca4bf3283a8bde6b92c628cbbd0851fe9b/dspy/clients/base_lm.py#L28
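If it helps, here is a rough sketch of what such a subclass might look like. `GeminiLM`, the fallback `BaseLM` stub, and the zeroed-out usage numbers are all illustrative, not part of DSPy; the key idea is that `forward()` calls `google-genai` and returns an OpenAI-style completion object:

```python
from types import SimpleNamespace

try:
    from dspy import BaseLM  # real base class when dspy is installed
except ImportError:
    # Minimal stand-in so this sketch runs without dspy installed.
    class BaseLM:
        def __init__(self, model, **kwargs):
            self.model = model
            self.kwargs = kwargs

class GeminiLM(BaseLM):
    """Hypothetical LM that calls google-genai under the hood."""

    def __init__(self, model="gemini-2.0-flash", client=None, **kwargs):
        super().__init__(model=model, **kwargs)
        if client is None:
            # Requires `pip install google-genai` and a configured API key.
            from google import genai
            client = genai.Client()
        self.client = client

    def forward(self, prompt=None, messages=None, **kwargs):
        # Flatten chat messages into a single string for generate_content.
        if messages is not None:
            prompt = "\n".join(m["content"] for m in messages)
        resp = self.client.models.generate_content(
            model=self.model, contents=prompt
        )
        # BaseLM expects an OpenAI-style completion object from forward();
        # token counts are zeroed here for brevity.
        return SimpleNamespace(
            model=self.model,
            choices=[SimpleNamespace(
                message=SimpleNamespace(content=resp.text, tool_calls=None),
                finish_reason="stop",
            )],
            usage=SimpleNamespace(
                prompt_tokens=0, completion_tokens=0, total_tokens=0
            ),
        )
```

You could then use it like any other LM, e.g. `dspy.configure(lm=GeminiLM())`.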

@GangGreenTemperTatum Thanks for reporting the issue! We have migrated to a new site; please see the optimizer information here: https://dspy.ai/learn/optimization/overview/

This proposal looks valid to me. We don't have anything planned yet for system metrics. @daniellok-db, maybe it's worth bringing this up in the standup? As long as this...

@FireMasterK Thanks for opening the issue report! We actually realized that reasoning models sometimes don't follow structural requirements, such as newlines. To use a reasoning model, please use...

@IliaMManolov Have you tried JSONAdapter or XMLAdapter? Relaxed handling is risky because it means adding arbitrary assumptions in order to reconstruct the desired format.

@andrewfr Thanks for reporting the issue! You can quickly check whether this is a DSPy issue, a prompt issue, or an LM issue by inspecting the history:

```
dspy.inspect_history(n=5)
```

By running...

> In any case, since I presume most of us use DSPy with remote LMs, async should really be the de-facto way of using the library.

Async matters when users...