[Feature] Get actual formatted prompt
What feature would you like to see?
How can we access the actual formatted prompt used by DSPy?
I am using the dspy Python package, version 2.6.15.
For example, with the following signature:
```python
import dspy


class AnswerToQuestion(dspy.Signature):
    """
    Answer the question.
    """

    question: str = dspy.InputField(desc="Question")
    answer: str = dspy.OutputField(desc="Answer")
```
How can I get the actual, formatted prompt that DSPy uses in the Predict module (`dspy.Predict(AnswerToQuestion)`), which is something like:
```
{'role': 'system', 'content': 'Your input fields are:\n1. question (str): Question\n\nYour output fields are:\n1. answer (str): Answer\n\nAll interactions will be structured in the following way, with the appropriate values filled in.\n\nInputs will have the following structure:\n\n[[ ## question ## ]]\n{question}\n\nOutputs will be a JSON object with the following fields.\n\n{\n "answer": "{answer}"\n}\n\nIn adhering to this structure, your objective is: \n Answer the question.'}
```
And this is incomplete, since there is no {'role': 'user', 'content': 'My actual question?'} entry containing my actual question.
I need this because I want to be able to use the generated prompt externally (i.e., not only through the DSPy Predict call itself), for example for benchmarking purposes.
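Concretely, the goal would be something like the sketch below. This is only an illustration of the intended external use, not DSPy API: the openai client, the model name, and the message contents are placeholders.

```python
from openai import OpenAI

# Placeholder: the exact messages that dspy.Predict(AnswerToQuestion) would send,
# i.e. a list of {'role': ..., 'content': ...} dicts. Getting this list is the
# feature being requested here.
messages = [
    {"role": "system", "content": "<the formatted DSPy instructions>"},
    {"role": "user", "content": "My actual question?"},
]

# Reuse that prompt outside of DSPy, e.g. for benchmarking.
client = OpenAI()  # assumes OPENAI_API_KEY is set; the model name below is a placeholder
response = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
print(response.choices[0].message.content)
```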
Is this already implemented, or does it need to be added?
Thanks!
Would you like to contribute?
- [ ] Yes, I'd like to help implement this.
- [ ] No, I just want to request it.
Additional Context
No response
@JBExcoffier Have you tried using ChatAdapter.format() for this?
To get just the supposed system prompt, I am using `dspy.adapters.chat_adapter.prepare_instructions(signature=AnswerToQuestion)`.
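In other words (assuming the AnswerToQuestion signature defined above):

```python
from dspy.adapters.chat_adapter import prepare_instructions

# Builds only the instruction part of the prompt from the signature
instructions = prepare_instructions(signature=AnswerToQuestion)
print(instructions)
```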
But it only gives the following string:
```
Your input fields are:
1. `question` (str): Question

Your output fields are:
1. `answer` (str): Answer

All interactions will be structured in the following way, with the appropriate values filled in.

[[ ## question ## ]]
{question}

[[ ## answer ## ]]
{answer}

[[ ## completed ## ]]

In adhering to this structure, your objective is:
        Answer the question.
```
For now I cannot access the actual messages that are sent to the language model, i.e. the prompt encapsulated in a list of dicts such as {'role': 'system', 'content': <the string above>}.
Indeed, dspy.ChatAdapter.format should give the above prompt encapsulated in an OpenAI-like list-of-dicts format, but I cannot make it work.
I don't want to pass few-shot examples, so demos is an empty list. With the following call:
```python
dspy.ChatAdapter.format(
    signature=AnswerToQuestion, demos=[], inputs=[dict(question="Capital of the UK ?")]
)
```
It raises the following error: `TypeError: ChatAdapter.format() missing 1 required positional argument: 'instance'`.
What's the correct way to pass the inputs, please? I already have the main part of the prompt from the dspy.adapters.chat_adapter.prepare_instructions function (and it's not complete, e.g. the JSON part is not present), but if possible, I would like to get the full messages (including roles) as a template, without passing an actual input.
Thanks!
I'll let an author reply, but I think this is what you want? It includes the full messages, including the roles, as an OpenAI-like list of dicts. demos is []. It doesn't pass in an actual input.
```python
import dspy

chat_adapter = dspy.ChatAdapter()
sig = dspy.ensure_signature('question -> answer')
messages = chat_adapter.format(sig, [], {})
print(messages)
```
```
[{'role': 'system', 'content': 'Your input fields are:\n1. `question` (str)\nYour output fields are:\n1. `answer` (str)\nAll interactions will be structured in the following way, with the appropriate values filled in.\n\n[[ ## question ## ]]\n{question}\n\n[[ ## answer ## ]]\n{answer}\n\n[[ ## completed ## ]]\nIn adhering to this structure, your objective is: \n Given the fields `question`, produce the fields `answer`.'}, {'role': 'user', 'content': 'Respond with the corresponding output fields, starting with the field `[[ ## answer ## ]]`, and then ending with the marker for `[[ ## completed ## ]]`.'}]
```
Looking at your code, you may be confusing a ChatAdapter instance (which I use) with the ChatAdapter class (which you use). Hopefully that helps clarify why you got the error.
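Roughly, the difference is the following (reusing your AnswerToQuestion signature; the question value is just a placeholder):

```python
inputs = dict(question="Capital of the UK ?")

# Unbound call on the class: the adapter instance is never supplied, which is
# why Python complains about a missing positional argument.
# dspy.ChatAdapter.format(signature=AnswerToQuestion, demos=[], inputs=inputs)

# Bound call on an instance: the intended usage.
messages = dspy.ChatAdapter().format(signature=AnswerToQuestion, demos=[], inputs=inputs)
```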
Indeed, it works better when I use an instance.
With the following call:
```python
dspy.ChatAdapter().format(
    signature=AnswerToQuestion, demos=[], inputs=dict(question="[MY QUESTION]")
)
```
I get the following output:
```
[{'role': 'system',
  'content': 'Your input fields are:\n1. `question` (str): Question\n\nYour output fields are:\n1. `answer` (str): Answer\n\nAll interactions will be structured in the following way, with the appropriate values filled in.\n\n[[ ## question ## ]]\n{question}\n\n[[ ## answer ## ]]\n{answer}\n\n[[ ## completed ## ]]\n\nIn adhering to this structure, your objective is: \n Answer the question.'},
 {'role': 'user',
  'content': '[[ ## question ## ]]\n[MY QUESTION]\n\nRespond with the corresponding output fields, starting with the field `[[ ## answer ## ]]`, and then ending with the marker for `[[ ## completed ## ]]`.'}]
```
So it's fine! But it's still not an actual template, since I passed an input. Without an input (an empty dict) I get an error, which is the same error I get when running your code: ValueError: Expected dict_keys(['question']) but got dict_keys([]).
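For reference, the call that fails on my side is:

```python
# Raises: ValueError: Expected dict_keys(['question']) but got dict_keys([])
dspy.ChatAdapter().format(signature=AnswerToQuestion, demos=[], inputs={})
```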
I am using DSPy 2.6.15, so maybe it's my version. Is it possible to get a full template answer (with an empty input)? Thanks!
@JBExcoffier Thanks for reporting the issue!
The formatted prompt is actually a multi-turn message. To get it, you can use dspy.inspect_history(), or dspy.settings.lm.history[-1]["messages"] to fetch it in your program.
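For example, a minimal sketch (the model name is just a placeholder; any configured LM works):

```python
import dspy

dspy.settings.configure(lm=dspy.LM("openai/gpt-4o-mini"))  # placeholder model

predictor = dspy.Predict(AnswerToQuestion)
predictor(question="Capital of the UK ?")

# Pretty-print the last LM call, including the formatted multi-turn prompt
dspy.inspect_history(n=1)

# Or fetch the raw messages (a list of {'role': ..., 'content': ...} dicts) programmatically
messages = dspy.settings.lm.history[-1]["messages"]
print(messages)
```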
If you want to see a more detailed breakdown of what's happening behind the scenes, please try out MLflow tracing, which visualizes every step: https://dspy.ai/tutorials/observability/
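Roughly, the setup looks like this (a sketch assuming a recent MLflow version with DSPy autologging; see the tutorial above for the full walkthrough):

```python
import mlflow

mlflow.set_experiment("dspy-prompt-inspection")  # placeholder experiment name
mlflow.dspy.autolog()  # traces DSPy calls, including the messages sent to the LM
```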