langchain
Progress bar for LLMChain
Hello, is there a way to track progress when passing a list of inputs to an LLMChain object, using tqdm for example? I didn't see any parameter that would allow me to use tqdm. I also checked whether I could write a Callback for this, but the hooks don't seem to allow for that. Has anyone managed to use a progress bar?
Can you provide a code snippet explaining the current behavior and the behavior you want?
Sorry for the lack of an example. We can take the example from the documentation: here
Current Behavior
input_list = [
    {"product": "socks"},
    {"product": "computer"},
    {"product": "shoes"}
]
llm_chain.apply(input_list)
Output:
[{'text': '\n\nSocktastic!'},
{'text': '\n\nTechCore Solutions.'},
{'text': '\n\nFootwear Factory.'}]
Behavior I would like
input_list = [
    {"product": "socks"},
    {"product": "computer"},
    {"product": "shoes"}
]
llm_chain.apply(input_list, show_progress=True)
Output:
33%|█████ | 1/3 [00:02<00:09, Xit/s] (inference w batch size X)
[{'text': '\n\nSocktastic!'},
{'text': '\n\nTechCore Solutions.'},
{'text': '\n\nFootwear Factory.'}]
Basically, I'm just looking for an easy way to track progress for a long input list. Thanks for the help!
Hi, @louisoutin! I'm Dosu, and I'm here to help the LangChain team manage their backlog. I wanted to let you know that we are marking this issue as stale.
From what I understand, you are requesting the addition of a progress bar to the LLMChain object in order to track progress when giving a list of inputs. You have provided a code snippet explaining the current behavior and the behavior you would like.
Before we proceed, we would like to confirm if this issue is still relevant to the latest version of the LangChain repository. If it is, please let us know by commenting on this issue. Otherwise, feel free to close the issue yourself or it will be automatically closed in 7 days.
Thank you for your understanding and contribution to the LangChain project!
Any progress on this, or does anybody have a nice hack? For MapReduce-like chains over a large number of doc chunks, this would totally make sense!
You can use a callback to do this.
First, define your callback:
from typing import Any
from uuid import UUID
from tqdm.auto import tqdm
from langchain_core.callbacks import BaseCallbackHandler
from langchain_core.outputs import LLMResult

class BatchCallback(BaseCallbackHandler):
    def __init__(self, total: int):
        super().__init__()
        self.count = 0
        self.progress_bar = tqdm(total=total)  # define a progress bar

    # Override the on_llm_end method. This is called after every response from the LLM
    def on_llm_end(self, response: LLMResult, *, run_id: UUID, parent_run_id: UUID | None = None, **kwargs: Any) -> Any:
        self.count += 1
        self.progress_bar.update(1)
Then, initialize an instance of the callback and run batch with it:
# Assume your chain is `chain`, inputs is `inputs`
cb = BatchCallback(len(inputs)) # init callback
chain.batch(inputs, config={"callbacks": [cb]})
cb.progress_bar.close()
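For readers who want to see the mechanics without a LangChain environment, here is a minimal stand-in sketch in plain Python: a batch runner that fires a per-item completion hook, mirroring how batch fires on_llm_end once per LLM response. The names ProgressCallback, on_item_end, and run_batch are invented for illustration; they are not LangChain APIs, and a real implementation would call tqdm's progress_bar.update(1) inside the hook instead of just counting.

```python
from typing import Any, Callable, Dict, List

class ProgressCallback:
    """Counts completed items, standing in for BatchCallback above."""
    def __init__(self, total: int):
        self.total = total
        self.count = 0

    def on_item_end(self, result: Any) -> None:
        # In the LangChain version, this is where progress_bar.update(1) runs.
        self.count += 1

def run_batch(fn: Callable[[Dict], Any], inputs: List[Dict], cb: ProgressCallback) -> List[Any]:
    """Apply fn to each input, invoking the callback after each completion."""
    results = []
    for item in inputs:
        out = fn(item)
        cb.on_item_end(out)  # analogous to on_llm_end firing per response
        results.append(out)
    return results

inputs = [{"product": "socks"}, {"product": "computer"}, {"product": "shoes"}]
cb = ProgressCallback(total=len(inputs))
results = run_batch(lambda d: {"text": d["product"].upper()}, inputs, cb)
```

After the run, cb.count equals len(inputs), which is exactly the invariant the tqdm bar visualizes.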