
Issue: How to print the complete prompt that the chain used

Open Riveray-Jiang opened this issue 1 year ago • 11 comments

Issue you'd like to raise.

qa = ConversationalRetrievalChain.from_llm(
    AzureChatOpenAI(deployment_name="gpt-35-turbo"),
    db.as_retriever(),
    memory=memory,
)
print(qa.combine_docs_chain.llm_chain.prompt)

ChatPromptTemplate(input_variables=['question', 'context'], output_parser=None, partial_variables={}, messages=[SystemMessagePromptTemplate(prompt=PromptTemplate(input_variables=['context'], output_parser=None, partial_variables={}, template="Use the following pieces of context to answer the users question. \nIf you don't know the answer, just say that you don't know, don't try to make up an answer.\n----------------\n{context}", template_format='f-string', validate_template=True), additional_kwargs={}), HumanMessagePromptTemplate(prompt=PromptTemplate(input_variables=['question'], output_parser=None, partial_variables={}, template='{question}', template_format='f-string', validate_template=True), additional_kwargs={})])

How can I get the complete prompt, including the question and context?

Suggestion:

No response

Riveray-Jiang avatar Jun 23 '23 04:06 Riveray-Jiang

What if I want to pass or edit more context, not just the similarity-search context? How can I achieve this?

Riveray-Jiang avatar Jun 23 '23 04:06 Riveray-Jiang

Just set langchain.verbose to True.

import langchain
langchain.verbose = True
call_function()

This should print the prompt before submitting.

Avinash-Raj avatar Jun 23 '23 15:06 Avinash-Raj

Just set langchain.verbose to True.

import langchain
langchain.verbose = True
call_function()

This should print the prompt before submitting.

This does not work for me.

I asked on Reddit and got a solution: https://www.reddit.com/r/LangChain/comments/1643z8k/is_there_a_way_to_print_out_the_full_prompt_that

harrywang avatar Aug 29 '23 00:08 harrywang

Worked for me too.

Abhijeetiyengar avatar Sep 05 '23 05:09 Abhijeetiyengar

Just set langchain.verbose to True.

import langchain
langchain.verbose = True
call_function()

This should print the prompt before submitting.

This does not work for me.

I asked on Reddit and got a solution: https://www.reddit.com/r/LangChain/comments/1643z8k/is_there_a_way_to_print_out_the_full_prompt_that

One of the solutions described in the Reddit post, which worked for me and the OP, was using

langchain.debug = True

This will cause LangChain to give detailed output for all the operations in the chain/agent, and that output includes the prompt sent to the LLM.

niagr avatar Oct 14 '23 07:10 niagr

Does verbose use logging or just print? I want to route that output through my logging setup.

npuichigo avatar Nov 16 '23 13:11 npuichigo
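Verbose mode writes via print to standard output rather than through the logging module. If you need that text in a logging system without changing the chain, one stdlib-only workaround is to redirect stdout while the chain runs and forward whatever was captured to a logger. A minimal sketch (the wrapped `fake_chain_call` is a hypothetical stand-in for a verbose chain invocation):

```python
import io
import logging
from contextlib import redirect_stdout

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("prompt-capture")

def call_with_logged_stdout(fn, *args, **kwargs):
    """Run fn, capture anything it prints, and forward the text to logging."""
    buf = io.StringIO()
    with redirect_stdout(buf):
        result = fn(*args, **kwargs)
    captured = buf.getvalue()
    if captured:
        log.info("captured chain output:\n%s", captured)
    return result

# Hypothetical stand-in for a verbose chain call that prints its prompt:
def fake_chain_call():
    print("Prompt after formatting:\nHello!")
    return "llm response"

result = call_with_logged_stdout(fake_chain_call)
```

Note this swallows all stdout from the call, including output you may have wanted on the console, so it is a blunt instrument compared to a proper callback.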

Setting "verbose" output is not a real solution, IMO. In my case, 80% of the prints are useless; I just want to get the final prompt sent to the LLM while using LCEL, and I see no easy way to do this unless I change my approach to something else.

pprobst avatar Jan 23 '24 18:01 pprobst

I agree with this. I need to be able to capture the full prompt that was sent to the LLM and store it in a database. If I find a solution, I will report back.

Tachyon5 avatar Jan 23 '24 19:01 Tachyon5

In my case, I just need to log it with my custom logging system, so after some digging I "solved" it with a callback like this:

import logging
from typing import Any, Dict, List

from langchain.callbacks.base import BaseCallbackHandler

_log = logging.getLogger(__name__)

class CustomHandler(BaseCallbackHandler):
    def on_llm_start(
        self, serialized: Dict[str, Any], prompts: List[str], **kwargs: Any
    ) -> Any:
        formatted_prompts = "\n".join(prompts)
        _log.info(f"Prompt:\n{formatted_prompts}")
...
output = chain.invoke({"info": input_text}, config={"callbacks": [CustomHandler()]})

pprobst avatar Jan 23 '24 19:01 pprobst
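For the store-it-in-a-database use case mentioned above, the handler could pass each captured prompt to a small storage helper instead of (or in addition to) logging it. A stdlib-only sketch of the storage side, assuming SQLite; the table name and `store_prompt` helper are hypothetical:

```python
import sqlite3

# Hypothetical prompt store; inside on_llm_start you would call
# store_prompt(formatted_prompts) with the captured prompt string.
conn = sqlite3.connect(":memory:")  # use a file path for persistence
conn.execute("CREATE TABLE prompts (id INTEGER PRIMARY KEY, prompt TEXT)")

def store_prompt(prompt: str) -> None:
    with conn:  # commits on success, rolls back on error
        conn.execute("INSERT INTO prompts (prompt) VALUES (?)", (prompt,))

store_prompt("System: You are a helpful bot\nHuman: What is LCEL?")
count = conn.execute("SELECT COUNT(*) FROM prompts").fetchone()[0]
```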

that's exactly the approach I was about to take. Thanks!!

Tachyon5 avatar Jan 23 '24 19:01 Tachyon5

Is it a duplicate of https://github.com/langchain-ai/langchain/issues/912 ?

cryoff avatar Jan 25 '24 17:01 cryoff

If anyone is looking for a simple string output of a single prompt, you can use the .format() method of ChatPromptTemplate; it should work with any BaseChatPromptTemplate subclass.

I struggled to find this as well. In my case, I wanted the final formatted prompt string used inside the API call.

Example usage:

from langchain.prompts import (
    ChatPromptTemplate,
    HumanMessagePromptTemplate,
    MessagesPlaceholder,
    PromptTemplate,
)
from langchain.schema import SystemMessage

# Define a partial variable for the chatbot to use
my_partial_variable = """APPLE SAUCE"""

# Initialize your chat template with partial variables
prompt_messages = [
    # System message
    SystemMessage(content=("""You are a hungry, hungry bot""")),
    # Instructions for the chatbot to set context and actions
    HumanMessagePromptTemplate(
        prompt=PromptTemplate(
            template="""Your life goal is to search for some {conversation_topic}. If you encounter food in the conversation below, please eat it:\n###\n{conversation}\n###\nHere is the food: {my_partial_variable}""",
            input_variables=["conversation_topic", "conversation"],
            partial_variables={"my_partial_variable": my_partial_variable},
        )
    ),
    # Placeholder for additional agent notes
    MessagesPlaceholder("agent_scratchpad"),
]

prompt = ChatPromptTemplate(messages=prompt_messages)
prompt_as_string = prompt.format(
    conversation_topic="Delicious food",
    conversation="Nothing about food to see here",
    agent_scratchpad=[],
)
print(prompt_as_string)
System: You are a hungry, hungry bot
Human: Your life goal is to search for some Delicious food. If you encounter food in the conversation below, please eat it:
###
Nothing about food to see here
###
Here is the food: APPLE SAUCE

nathanjones4323 avatar Feb 09 '24 05:02 nathanjones4323