
DSPy: The framework for programming—not prompting—language models

Results: 691 dspy issues, sorted by recently updated

When it comes to fully production-grade inference servers, TIS (Triton Inference Server) is highly optimized and open source. So an integration of it in DSPy, along with TensorRT-LLM (#1094), would...

Added support for spBLEU (SentencePiece BLEU) and cosine similarity as new evaluation metrics in the dspy module to enhance text similarity and performance analysis.
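A metric in DSPy is just a function taking an example, a prediction, and an optional trace. As a hedged illustration of the cosine-similarity side of this change (the embedding model, field names, and data below are assumptions, not the PR's code), such a metric plugs straight into `dspy.Evaluate`:

```python
import dspy
from sentence_transformers import SentenceTransformer, util

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # illustrative model choice

def cosine_metric(example, prediction, trace=None):
    # Embed the gold answer and the predicted answer, return their cosine similarity.
    gold, pred = encoder.encode([example.answer, prediction.answer], convert_to_tensor=True)
    return util.cos_sim(gold, pred).item()

# Tiny illustrative devset; assumes an LM is already set via dspy.configure(...).
devset = [dspy.Example(question="Capital of France?", answer="Paris").with_inputs("question")]
evaluate = dspy.Evaluate(devset=devset, metric=cosine_metric)
evaluate(dspy.Predict("question -> answer"))
```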

Add support for vision data for various LLM vendors (Gemini, GPT, Azure OpenAI GPT). This implements the feature requested in https://github.com/stanfordnlp/dspy/issues/624. This adds the `is_image` property to `InputField`. We expect this...
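Going by the PR description alone, a signature using the proposed flag might look like the sketch below; the field names are illustrative, and the exact `is_image` API is whatever the PR implements:

```python
import dspy

class DescribeImage(dspy.Signature):
    """Describe the contents of the image."""
    image = dspy.InputField(is_image=True, desc="the image to describe")  # proposed flag
    description = dspy.OutputField()

describe = dspy.Predict(DescribeImage)
```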

Hi DSPy developers, First of all, thanks a lot for this great work! Recently I've been trying to integrate DSPy into my work, but I stumbled upon the chat history...

I think one of the great improvements that DSPy has made was going from the term "Teleprompter" to "Optimizer." That change made what DSPy was doing much clearer. That terminology...


Sub-modules might need different compile settings. For example, suppose we have a sub-module that summarizes a list of contexts; we may want to set the `max_bootstrapped_demos` and `max_labeled_demos` of the main...
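One way to get per-sub-module settings today (a minimal sketch; the metric, trainset, and demo budgets are illustrative assumptions, not from the issue) is to compile the sub-module separately with its own `BootstrapFewShot` before composing it into the main program:

```python
import dspy
from dspy.teleprompt import BootstrapFewShot

# Illustrative placeholders -- not from the issue.
def summary_metric(example, pred, trace=None):
    return example.summary.lower() in pred.summary.lower()

summary_trainset = [
    dspy.Example(contexts="A. B. C.", summary="ABC").with_inputs("contexts"),
]

summarizer = dspy.ChainOfThought("contexts -> summary")

# Give the summarizing sub-module its own, smaller demo budget...
sub_optimizer = BootstrapFewShot(
    metric=summary_metric, max_bootstrapped_demos=1, max_labeled_demos=2
)
compiled_summarizer = sub_optimizer.compile(summarizer, trainset=summary_trainset)

# ...then embed compiled_summarizer in the main program and compile that
# separately with larger budgets (main program omitted for brevity).
```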


I'm trying to use Llama 3.1 70B to do a "multi-needle in a haystack" search. Basically, I'm asking the model to take a text and search through a list of terms;...
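For what it's worth, one way to phrase that task in DSPy (a hedged sketch; the field names and comma-separated encoding are assumptions, not from the issue) is:

```python
import dspy

class FindTerms(dspy.Signature):
    """Return only the terms from the candidate list that occur in the text."""
    text = dspy.InputField()
    terms = dspy.InputField(desc="comma-separated candidate terms")
    found_terms = dspy.OutputField(desc="comma-separated terms present in the text")

finder = dspy.Predict(FindTerms)
result = finder(text="alpha and gamma appear here", terms="alpha, beta, gamma")
print(result.found_terms)
```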


How can I make this reply with only the answer? ![image](https://github.com/user-attachments/assets/fe40e8c7-dba5-4184-b4ef-df75278c975a)
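The prediction object exposes each output field by name, so printing only that field drops the surrounding formatting; using `dspy.Predict` (rather than `dspy.ChainOfThought`) also keeps reasoning text out of the output. A minimal sketch:

```python
import dspy

qa = dspy.Predict("question -> answer")  # no chain-of-thought rationale
pred = qa(question="What is the capital of France?")
print(pred.answer)  # only the answer field, not the full formatted reply
```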

Langtrace added support for native DSPy projects, with support for tracing and experimentation.

### Instructions
https://docs.langtrace.ai/supported-integrations/llm-frameworks/dspy#dspy

### Inference Metrics
![image](https://github.com/user-attachments/assets/f17c861c-5056-4445-b956-3bbb225ed094)

### Evaluation Scores
![image](https://github.com/user-attachments/assets/95b6269e-b149-4d11-a00c-8a90c53cb788)

If you run an evaluation and the model messes up the output formatting, litellm will still cache the response, because it was valid, just not in the correct schema. If this happens enough...
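A hedged workaround sketch until that's addressed: turn off the LM cache for evaluation runs so a malformed-but-cached response is not reused. The `cache=False` knob exists on `dspy.LM` in recent (2.5+) versions; the model name is illustrative:

```python
import dspy

# Disable caching so a badly formatted completion is re-requested next time.
lm = dspy.LM("openai/gpt-4o-mini", cache=False)
dspy.configure(lm=lm)
```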