
DSPy: The framework for programming—not prompting—language models

Results: 691 dspy issues

Added tests for the extend generation logic to avoid any further breaking changes (#920) which must then be reverted (#1169). I'm not sure the behaviour currently exhibited is intended? I have...

**Script**

```
from typing import List, Literal, Dict, Any, Optional

import dspy
from datasets import load_dataset
from pydantic import BaseModel, Field

llm = dspy.LM(model="databricks/databricks-meta-llama-3-1-70b-instruct")
dspy.settings.configure(lm=llm)

# Load the CoNLL 2003...
```
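For context, a minimal sketch of how a CoNLL-style NER setup like this typically continues with a typed signature. The `Entity` model, `ExtractEntities` signature, and the example sentence below are illustrative assumptions, not the remainder of the issue author's script:

```
from typing import List, Literal

import dspy
from pydantic import BaseModel, Field

# Hypothetical typed output model for CoNLL-style entity labels.
class Entity(BaseModel):
    text: str = Field(description="The entity span as it appears in the sentence")
    label: Literal["PER", "ORG", "LOC", "MISC"]

# Hypothetical signature; field names are illustrative.
class ExtractEntities(dspy.Signature):
    """Extract named entities from the sentence."""
    sentence: str = dspy.InputField()
    entities: List[Entity] = dspy.OutputField()

# Assumes an LM has already been configured via dspy.settings.configure(...).
extractor = dspy.Predict(ExtractEntities)
pred = extractor(sentence="EU rejects German call to boycott British lamb.")
print(pred.entities)
```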

Fixes #728. Currently the use of OpenAIVectorizer fails with recent OpenAI API version(s). Versions: `openai==1.13.3`, `dspy-ai==2.4.0` (same relevant code on main). Error:

```
vectorizer = OpenAIVectorizer()
vectorizer("Hello world!")
...
cur_batch_embeddings...
```
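For reference, the breaking change in question is the `openai>=1.0` client interface, which replaced the module-level `openai.Embedding.create` call with a client object. A minimal sketch of an embedding call against the new API (the embedding model name is an illustrative choice, not necessarily what the PR uses):

```
from openai import OpenAI  # openai>=1.0 style client

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# client.embeddings.create replaces the pre-1.0 openai.Embedding.create call.
resp = client.embeddings.create(
    model="text-embedding-3-small",  # illustrative model choice
    input=["Hello world!"],
)
embedding = resp.data[0].embedding
print(len(embedding))
```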

Just cleaning up home directory. I need to adjust the first block with relative imports

Hello, and thank you for the lovely tool. I've been struggling to get it to work with different LLM providers like Anthropic or some unorthodox LLMs that need to use...
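As a point of comparison, a minimal sketch of configuring a non-OpenAI provider through `dspy.LM`, which routes LiteLLM-style "provider/model" strings; the model name and environment variable below are illustrative assumptions:

```
import os

import dspy

# dspy.LM accepts LiteLLM-style "provider/model" strings, so Anthropic
# models can be configured the same way as OpenAI ones.
lm = dspy.LM(
    "anthropic/claude-3-5-sonnet-20240620",   # illustrative model name
    api_key=os.environ["ANTHROPIC_API_KEY"],  # or rely on the env var directly
    max_tokens=1024,
)
dspy.settings.configure(lm=lm)

# dspy.LM is callable directly; it returns a list of completions.
print(lm("Say hello in one short sentence."))
```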

Found this necessary for guiding output length when Signature descriptions weren't impactful. Would be interesting to see system prompt integration in optimizers.
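For context, the existing way to nudge output length is through signature docstrings and field descriptions, which is what the comment above found insufficient. A minimal sketch of that baseline approach (the signature and constraint wording are illustrative):

```
import dspy

# Illustrative signature; the length constraint lives in the docstring
# and the OutputField description rather than a dedicated system prompt.
class Summarize(dspy.Signature):
    """Summarize the passage. Keep the summary under 50 words."""
    passage: str = dspy.InputField()
    summary: str = dspy.OutputField(desc="A concise summary, no more than 50 words.")

summarizer = dspy.Predict(Summarize)
# pred = summarizer(passage="...")  # requires an LM configured via dspy.settings
```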

TODO
- [x] Test
- [x] DSPy style cache
- TODO: handle non-400 return code

While experimenting with GPT-4/Gemini, I noticed that sometimes the completion will contain the input. In that case, the previous `template.extract` will fail to extract (see the `test_single_output_with_noise` test). This provides a fix...

A warning about using the `AzureOpenAI` client is displayed in version `2.5.2`; the recommendation is to use `dspy.LM`. I get a serialization exception when I change the client from this:...
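For context, a minimal sketch of the recommended `dspy.LM` setup for Azure, using LiteLLM's `azure/<deployment>` naming; the deployment name, endpoint, and API version below are illustrative placeholders, not values from this issue:

```
import os

import dspy

# dspy.LM routes "azure/<deployment-name>" strings through LiteLLM.
lm = dspy.LM(
    "azure/my-gpt-4o-deployment",                       # illustrative deployment name
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_base="https://my-resource.openai.azure.com/",   # illustrative endpoint
    api_version="2024-02-15-preview",                   # illustrative API version
)
dspy.settings.configure(lm=lm)
```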

We can infer that history in DSPy acts as a trace, where we can see previous LLM responses along with all the additional arguments. However, with more and more LLM...
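For reference, a minimal sketch of how that history can be inspected today; the model string and example signature are illustrative choices:

```
import dspy

lm = dspy.LM("openai/gpt-4o-mini")  # illustrative model choice
dspy.settings.configure(lm=lm)

# Any call through a module is recorded in the trace.
dspy.Predict("question -> answer")(question="What is DSPy?")

# Pretty-print the most recent LM call (prompt, response, and kwargs).
dspy.inspect_history(n=1)

# The raw trace is also kept on the LM object as a list of dicts.
print(len(lm.history), lm.history[-1].keys())
```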