LangChain isn't verbose anymore [run_manager gets dropped]
System Info
Python 3.11.2
LangChain 0.0.161
Debian GNU/Linux 12 (bookworm)
Who can help?
No response
Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [X] Callbacks/Tracing
- [X] Async
Reproduction
How does verbosity work? My Python scripts used to print verbose information, meaning all the green prompt text emitted while executing chains. All of my LLM and chain constructors pass verbose=True. But LangChain's behavior changed somewhat recently: after a pip install --upgrade langchain, my verbose output disappeared completely.
Do you have some sort of changelog or documentation on this? I didn't find any.
A small script still produces verbose output, but my project doesn't. I'm not sure what I'm doing differently.
EDIT: Maybe I should add that I'm making async calls and feeding in my own prompts from a PromptTemplate.from_template("..."). Could it be related to any of that?
EDIT2: Where does the green text even come from? I can see the chain calls a StdOutCallbackHandler. This fires and prints "Entering new ConversationChain chain..." as it's supposed to. But how do I debug the missing green text after that? It's not in on_chain_start()...
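As far as I can tell, the green "Prompt after formatting" text is emitted through the on_text callback (which StdOutCallbackHandler prints), not through on_chain_start. A minimal debugging sketch, with a hypothetical OnTextDebugHandler, to check whether that event fires at all in the async path:

```python
from langchain.callbacks.base import BaseCallbackHandler


class OnTextDebugHandler(BaseCallbackHandler):
    """Print every on_text event; this is where the green prompt text
    should show up if the callback plumbing is intact."""

    def on_text(self, text: str, **kwargs) -> None:
        print(f"[on_text] {text!r}")


# Pass it per call, e.g.:
#   chain.run("hello sync", callbacks=[OnTextDebugHandler()])
#   await chain.arun("hello async", callbacks=[OnTextDebugHandler()])
```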
Expected behavior
Verbose output is produced when setting verbose=True
Please provide example code for reproduction.
Unfortunately I can't just rip it out of the project. Isn't there some documentation on how verbose is supposed to work, or on what to do with langchain.verbose? Or a changelog to make it easier for me to start bisecting where LangChain's behavior changed? I see a few more (unresolved) questions/issues regarding verbosity and callbacks from the last few days.
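For reference, the global flag can also be set; a sketch, assuming langchain.verbose acts as the library-wide fallback when a component's own verbose flag isn't set:

```python
import langchain

# Library-wide fallback; components that were not constructed with
# verbose=True consult this flag.
langchain.verbose = True
```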
Okay. The issue seems to be related to the async code. If I call chain.arun(), the verbose output isn't there.
@PawelFaron here's some example code:
import asyncio

from fakellm import FakeListLLM
from langchain import LLMChain
from langchain.prompts import PromptTemplate


async def main():
    responses = [
        "foo",
        "bar",
    ]
    llm = FakeListLLM(responses=responses)
    prompt_summary = PromptTemplate.from_template(
        """Instruction: Write a concise summary of the following:
{text}
### Response:
"""
    )
    chain = LLMChain(llm=llm, prompt=prompt_summary, verbose=True)
    chain.run("hello sync")
    await chain.arun("hello async")


asyncio.run(main())
The fake LLM in LangChain is also missing an _acall method. I just copy-pasted _call to make it work:
"""Fake LLM wrapper for testing purposes."""
import asyncio
from typing import Any, List, Mapping, Optional
from langchain.callbacks.manager import CallbackManagerForLLMRun
from langchain.llms.base import LLM
class FakeListLLM(LLM):
"""Fake LLM wrapper for testing purposes."""
responses: List
i: int = 0
@property
def _llm_type(self) -> str:
"""Return type of llm."""
return "fake-list"
def _call(
self,
prompt: str,
stop: Optional[List[str]] = None,
run_manager: Optional[CallbackManagerForLLMRun] = None,
) -> str:
"""First try to lookup in queries, else return 'foo' or 'bar'."""
response = self.responses[self.i]
self.i += 1
return response
async def _acall(
self,
prompt: str,
stop: Optional[List[str]] = None,
run_manager: Optional[CallbackManagerForLLMRun] = None,
) -> str:
"""First try to lookup in queries, else return 'foo' or 'bar'."""
response = self.responses[self.i]
self.i += 1
return response
@property
def _identifying_params(self) -> Mapping[str, Any]:
return {}
The call to chain.run() produces verbose output. The call to chain.arun() doesn't.
@h3ndrik I think this is related to the fix in #4130, where the run_manager param is not passed. You can use my patch until the LangChain team fixes it.
@blob42 Thank you. I had a look at LLMChain and, there too, the run_manager is passed on to other functions inside generate() but not inside agenerate().
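Roughly, the fix would be for agenerate() to forward the run manager the same way generate() does. A sketch only; the method names and signatures below are my reading of the 0.0.161 internals and may not match the actual code exactly:

```python
# Sketch of the expected async path in LLMChain (assumed internals, not a patch):
async def agenerate(self, input_list, run_manager=None):
    """Generate LLM result from inputs, forwarding callbacks like generate() does."""
    prompts, stop = await self.aprep_prompts(input_list, run_manager=run_manager)
    return await self.llm.agenerate_prompt(
        prompts, stop, callbacks=run_manager.get_child() if run_manager else None
    )
```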
This happened in PR #3256. I'm not sure if it was deliberate: PR merged, 45 commits, 209 files changed, "No description provided." :'-(
Hi, I do the following and it doesn't produce verbose output. Is there any way to see what happens under the hood, other than using PromptLayer?
PROMPT = ...  # some custom prompt
chain_type_kwargs = {"prompt": PROMPT}
qa = RetrievalQA.from_chain_type(
    llm=OpenAI(openai_api_key=OPENAI_KEY, temperature=0., max_tokens=1024, verbose=True),
    chain_type="stuff",
    retriever=querybase,
    return_source_documents=True,
    verbose=True,
    chain_type_kwargs=chain_type_kwargs,
)
qa.combine_documents_chain.verbose = True
...
query = input("\nEnter a query: ")
res = qa(query)
print(res)
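One way to see what happens under the hood without PromptLayer is to pass a stdout handler explicitly when invoking the chain, so the events are printed even if the verbose flag isn't propagated to the nested chains. A sketch, using the qa object from the snippet above:

```python
from langchain.callbacks import StdOutCallbackHandler

# Handlers passed at call time apply to this run and its child runs.
res = qa(query, callbacks=[StdOutCallbackHandler()])
print(res)
```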
Hi, @h3ndrik! I'm Dosu, and I'm helping the LangChain team manage their backlog. I wanted to let you know that we are marking this issue as stale.
Based on my understanding, the issue you reported was about the verbose output not being displayed after updating to the latest version of LangChain. You provided code that showed the issue was related to async code, and there was a suggestion from blob42 to patch the issue. It was also mentioned that the issue may be related to a previous PR, and jppaolim reported a similar issue with verbose output not being displayed.
Before we close this issue, we wanted to check with you if it is still relevant to the latest version of the LangChain repository. If it is, please let us know by commenting on this issue. Otherwise, feel free to close the issue yourself, or it will be automatically closed in 7 days.
Thank you for your contribution to the LangChain repository, and please don't hesitate to reach out if you have any further questions or concerns.
This still happens in the latest version of LangChain when I try to use SelfQueryRetriever:
retriever = SelfQueryRetriever.from_llm(
    llm=llm,
    vectorstore=vectorstore,
    document_contents=document_content_description,
    metadata_field_info=metadata_field_info,
    verbose=True,
)
retriever.get_relevant_documents(query)
It doesn't show any verbose output
+1 on the issue with verbose=True not helping with printing the Query generated.
Although line 175 in the code clearly shows it should output the generated query, I am not entirely sure how to fix it.
As I wanted to quickly see the query as well as what's happening under the hood, the following helped:
from langchain.callbacks.tracers import ConsoleCallbackHandler

handler = ConsoleCallbackHandler()
query = "top 5 best selling sofas for living room below 500k"
docs = self_query_retriever.get_relevant_documents(
    query,
    callbacks=[handler],
)
Just to print the query:
from langchain.callbacks.base import BaseCallbackHandler
from langchain.chains.query_constructor.ir import StructuredQuery


class MyCustomHandler(BaseCallbackHandler):
    def on_chain_end(self, outputs, **kwargs):
        """Run when chain ends running."""
        if isinstance(outputs, StructuredQuery):
            print(f"Query generated from custom callback handler: \n{outputs}")
            print("-" * 25)


handler = MyCustomHandler()
query = "top 5 best selling sofas for living room below 500k"
docs = self_query_retriever.get_relevant_documents(
    query,
    callbacks=[handler],
)
@aasthavar You can temporarily fix it by changing the actual library code to not check for the verbose=True flag and show the debug statement directly instead. Also, check whether your Python logging level is set to INFO first.
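For the logging-level check, a minimal sketch (plain Python logging, nothing LangChain-specific):

```python
import logging

# Ensure INFO-level records aren't filtered out before they reach the console.
logging.basicConfig(level=logging.INFO)
```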
@HiddenMachine3 I was having the same issue with SelfQueryRetriever. Appreciate your solution