
Request: ability to set callbacks with LCEL at instantiation

Open · jamesbraza opened this issue 2 years ago

Feature request

With LLMChain, it was possible to pass callbacks at instantiation and then just pass around the LLMChain.
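
For reference, here is roughly how the old pattern looked (a sketch, assuming the same langchain import paths used elsewhere in this issue):

from langchain.chains import LLMChain
from langchain.chat_models import ChatOpenAI
from langchain.prompts import PromptTemplate
from langchain.schema.callbacks.stdout import StdOutCallbackHandler

prompt = PromptTemplate.from_template(
    "What is a good name for a company that makes {product}?"
)
# Callbacks are bound once, at construction time...
chain = LLMChain(
    llm=ChatOpenAI(),
    prompt=prompt,
    callbacks=[StdOutCallbackHandler()],
)
# ...and fire on every call without being passed again.
chain.run(product="colorful socks")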

With LCEL, the only way to handle callbacks is to pass them to every invoke call. This forces one to pass around both the runnable LCEL object and the callbacks.
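
Reusing the prompt and handler from the sketch above, the per-call pattern today looks something like:

runnable = prompt | ChatOpenAI()

# The callbacks have to be threaded through every call, and through any
# other call site that ends up invoking this runnable:
runnable.invoke(
    {"product": "colorful socks"},
    config={"callbacks": [StdOutCallbackHandler()]},
)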

Motivation

It's preferable to bake the callbacks into the LCEL object once, at instantiation, so that they are called on every invoke.

Your contribution

I can contribute something if I can get confirmation that this is desirable. The callbacks would be inserted at the base of the LCEL chain:

from langchain.chat_models import ChatOpenAI
from langchain.prompts import PromptTemplate
from langchain.schema.callbacks.stdout import StdOutCallbackHandler
from langchain.schema.runnable import RunnableConfig

config = RunnableConfig(callbacks=[StdOutCallbackHandler()])
prompt = PromptTemplate.from_template(
    "What is a good name for a company that makes {product}?"
)
# Proposed syntax (not currently valid): prepending the config would bake
# the callbacks into the chain itself.
runnable = config | prompt | ChatOpenAI()
runnable.invoke(input={"product": "colorful socks"})

jamesbraza · Dec 04 '23

🤖

That sounds like a great idea! Adding the ability to set callbacks at instantiation would definitely improve the usability and convenience of LCEL. Your proposed contribution looks promising, and it would be desirable to have this feature in LangChain. Thank you for considering this enhancement, and we encourage you to go ahead and create a pull request with your proposed changes. Your contribution will be much appreciated by the LangChain community!


This response is meant to be useful and save you time. It is not meant to be a precise solution, but rather a starting point for your own research.

Help me be more useful! Please leave a 👍 if this is helpful and 👎 if it is irrelevant.

If you want to continue the conversation, start your reply with @dosu-bot.

dosubot[bot] · Dec 04 '23

Is this implemented or on the roadmap somewhere so I can track it? Also, is there currently any way to achieve this effect in LCEL, however convoluted? The example doesn't seem to run, at least on v0.1.3.

dumbPy · Jan 24 '24

I'd also be interested in this feature.

jcmcclurg · Jan 30 '24

You can sort of do this already using .with_config (though it's not well documented, as you mentioned in #14134), and it doesn't give logging as complete as you get from using callbacks in LLMChain (as mentioned in #14135):

from langchain.chat_models import ChatOpenAI
from langchain.prompts import PromptTemplate
from langchain.schema.callbacks.stdout import StdOutCallbackHandler
from langchain.schema.runnable import RunnableConfig

config = RunnableConfig(callbacks=[StdOutCallbackHandler()])
prompt = PromptTemplate.from_template(
    "What is a good name for a company that makes {product}?"
)

# in two lines
runnable = prompt | ChatOpenAI()
runnable = runnable.with_config(config)

# OR in one line
runnable = (prompt | ChatOpenAI()).with_config(config)

runnable.invoke(input={"product": "colorful socks"})

The callbacks are then baked into the object in advance and get called on each invoke.
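
If I understand the config-merging behavior correctly (an assumption on my part, worth verifying), a config passed at call time is merged with the bound one rather than replacing it, so extra handlers can still be layered on per call:

# Assumed behavior: the per-call config merges with the bound config,
# so both the baked-in handler and this one should fire.
runnable.invoke(
    input={"product": "colorful socks"},
    config=RunnableConfig(callbacks=[StdOutCallbackHandler()]),
)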

s-pike · Feb 07 '24

@s-pike Hi,

If you get a moment: using your method, how could I print info from the following callback?

from langchain_community.callbacks.bedrock_anthropic_callback import BedrockAnthropicTokenUsageCallbackHandler

austinmw · Apr 10 '24