langchain
Issue: How to print the complete prompt that the chain used
Issue you'd like to raise.
qa = ConversationalRetrievalChain.from_llm(AzureChatOpenAI(deployment_name="gpt-35-turbo"), db.as_retriever(), memory=memory)
print(qa.combine_docs_chain.llm_chain.prompt)
ChatPromptTemplate(input_variables=['question', 'context'], output_parser=None, partial_variables={}, messages=[SystemMessagePromptTemplate(prompt=PromptTemplate(input_variables=['context'], output_parser=None, partial_variables={}, template="Use the following pieces of context to answer the users question. \nIf you don't know the answer, just say that you don't know, don't try to make up an answer.\n----------------\n{context}", template_format='f-string', validate_template=True), additional_kwargs={}), HumanMessagePromptTemplate(prompt=PromptTemplate(input_variables=['question'], output_parser=None, partial_variables={}, template='{question}', template_format='f-string', validate_template=True), additional_kwargs={})])
How can I get the complete prompt, including the question and context?
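For context, the template printed above uses plain f-string substitution under the hood (template_format='f-string'). Here is a minimal plain-Python sketch, with no LangChain required and with illustrative function and variable names, of what the fully formatted prompt ends up looking like once the question and retrieved context are filled in:

```python
# Illustrative sketch only: mirrors the f-string substitution that the
# ChatPromptTemplate above performs. Names here are made up for demonstration.
system_template = (
    "Use the following pieces of context to answer the users question. \n"
    "If you don't know the answer, just say that you don't know, "
    "don't try to make up an answer.\n----------------\n{context}"
)
human_template = "{question}"

def format_full_prompt(context: str, question: str) -> str:
    # The chain sends a System message followed by a Human message.
    system = system_template.format(context=context)
    human = human_template.format(question=question)
    return f"System: {system}\nHuman: {human}"

print(format_full_prompt(context="Paris is the capital of France.",
                         question="What is the capital of France?"))
```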
Suggestion:
No response
What if I want to pass or edit additional context, not just the similarity-search context? How can I achieve this?
Just set langchain.verbose to True.
import langchain
langchain.verbose = True
call_function()
This should print the prompt before submitting.
This does not work for me.
I asked on Reddit and got a solution https://www.reddit.com/r/LangChain/comments/1643z8k/is_there_a_way_to_print_out_the_full_prompt_that
Worked for me too.
One of the solutions described in the Reddit post, which worked for me and OP, was using
langchain.debug = True
This will cause LangChain to emit detailed output for every operation in the chain/agent, and that output includes the prompt sent to the LLM.
Does verbose use logging or just print? I want to route that output into my logging setup.
Setting "verbose" output is not a real solution, IMO. In my case, 80% of the prints are useless; I just want to get the final prompt sent to the LLM while using LCEL, and I see no easy way to do this unless I change my approach to something else.
I agree with this. I need to be able to capture the full prompt that was sent to the LLM and store it in a database. If I find a solution, I will report back.
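For the store-it-in-a-database use case, one option is to persist each captured prompt as a row. This is only a sketch, assuming a simple SQLite table; the init_db and store_prompt helper names are made up, and you would call store_prompt from wherever you intercept the final prompt (for example a callback):

```python
import sqlite3

# Sketch: persist intercepted prompts in SQLite. init_db/store_prompt are
# hypothetical helpers, not part of LangChain.
def init_db(conn: sqlite3.Connection) -> None:
    conn.execute(
        "CREATE TABLE IF NOT EXISTS prompts ("
        "id INTEGER PRIMARY KEY AUTOINCREMENT, "
        "prompt TEXT NOT NULL, "
        "created_at TEXT DEFAULT CURRENT_TIMESTAMP)"
    )

def store_prompt(conn: sqlite3.Connection, prompt: str) -> None:
    conn.execute("INSERT INTO prompts (prompt) VALUES (?)", (prompt,))
    conn.commit()

conn = sqlite3.connect(":memory:")
init_db(conn)
store_prompt(conn, "System: ...\nHuman: What is the capital of France?")
```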
In my case, I just needed to log it with my custom logging system, so after some digging I "solved" it with a callback like this:
from typing import Any, Dict, List
from langchain.callbacks.base import BaseCallbackHandler

class CustomHandler(BaseCallbackHandler):
    def on_llm_start(
        self, serialized: Dict[str, Any], prompts: List[str], **kwargs: Any
    ) -> Any:
        # Called with the final prompt(s) just before they are sent to the LLM
        formatted_prompts = "\n".join(prompts)
        _log.info(f"Prompt:\n{formatted_prompts}")
...
output = chain.invoke({"info": input_text}, config={"callbacks": [CustomHandler()]})
that's exactly the approach I was about to take. Thanks!!
Is it a duplicate of https://github.com/langchain-ai/langchain/issues/912 ?
If anyone is looking for a simple string output of a single prompt, you can use the .format() method of ChatPromptTemplate; it should work with any BaseChatPromptTemplate subclass.
I struggled to find this as well. In my case, I wanted the final formatted prompt string used inside the API call.
Example usage:
# Define a partial variable for the chatbot to use
my_partial_variable = """APPLE SAUCE"""
# Initialize your chat template with partial variables
prompt_messages = [
# System message
SystemMessage(content=("""You are a hungry, hungry bot""")),
# Instructions for the chatbot to set context and actions
HumanMessagePromptTemplate(
prompt=PromptTemplate(
template="""Your life goal is to search for some {conversation_topic}. If you encounter food in the conversation below, please eat it:\n###\n{conversation}\n###\nHere is the food: {my_partial_variable}""",
input_variables=["conversation_topic", "conversation"],
partial_variables={"my_partial_variable": my_partial_variable},
)
),
# Placeholder for additional agent notes
MessagesPlaceholder("agent_scratchpad"),
]
prompt = ChatPromptTemplate(messages=prompt_messages)
prompt_as_string = prompt.format(
conversation_topic="Delicious food",
conversation="Nothing about food to see here",
agent_scratchpad=[],
)
print(prompt_as_string)
System: You are a hungry, hungry bot
Human: Your life goal is to search for some Delicious food. If you encounter food in the conversation below, please eat it:
###
Nothing about food to see here
###
Here is the food: APPLE SAUCE
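As a sanity check, the string printed above is just each message's role prefix joined with its formatted content. A plain-Python sketch, with no LangChain required and values taken from the example above, that reproduces the same flattening:

```python
# Plain-Python sketch of how .format() flattens chat messages into the
# single string shown above. Roles and content copied from the example.
my_partial_variable = "APPLE SAUCE"
messages = [
    ("System", "You are a hungry, hungry bot"),
    ("Human",
     ("Your life goal is to search for some {conversation_topic}. "
      "If you encounter food in the conversation below, please eat it:"
      "\n###\n{conversation}\n###\nHere is the food: {my_partial_variable}").format(
         conversation_topic="Delicious food",
         conversation="Nothing about food to see here",
         my_partial_variable=my_partial_variable,
     )),
]
prompt_as_string = "\n".join(f"{role}: {content}" for role, content in messages)
print(prompt_as_string)
```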