
Feat: Adding token usage for ChatOpenAI. Resolves #1519, resolves #1429

Open stephenleo opened this issue 2 years ago • 4 comments

  • Added the on_llm_end callback to ChatOpenAI's __call__
  • Fixed multiple linting errors arising from the above change
  • Added tests to chat_models/test_openai.py
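The change above follows a simple pattern: once a chat call completes, `on_llm_end` receives a result whose `llm_output` carries the OpenAI `token_usage` dict. A minimal sketch of that pattern, using stand-in classes rather than the real langchain types (the `LLMResult` and handler below are illustrative, not the actual API):

```python
# Hypothetical sketch of the token-usage callback pattern this PR adds.
# LLMResult and TokenUsageHandler are minimal stand-ins, not the real
# langchain classes.
from dataclasses import dataclass, field

@dataclass
class LLMResult:
    generations: list
    llm_output: dict = field(default_factory=dict)

class TokenUsageHandler:
    """Accumulates token counts across calls via on_llm_end."""
    def __init__(self):
        self.total_tokens = 0

    def on_llm_end(self, result: LLMResult) -> None:
        usage = result.llm_output.get("token_usage", {})
        self.total_tokens += usage.get("total_tokens", 0)

# Simulated non-streaming response, shaped like an OpenAI API reply.
handler = TokenUsageHandler()
handler.on_llm_end(LLMResult(
    generations=[["Hello!"]],
    llm_output={"token_usage": {"prompt_tokens": 9,
                                "completion_tokens": 3,
                                "total_tokens": 12}},
))
print(handler.total_tokens)  # → 12
```

The key design point is that the usage numbers ride along in `llm_output`, so any registered handler can read them without the model class knowing which handlers exist.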

stephenleo avatar Mar 18 '23 03:03 stephenleo

How can total_tokens be made to work in streaming mode as well? For example: `chat = ChatOpenAI(streaming=True, callback_manager=CallbackManager([StreamingStdOutCallbackHandler()]), verbose=True, temperature=0.7)`. I tested the commit and found it only works in non-streaming mode.

lenohard avatar Mar 18 '23 06:03 lenohard

That's right. Streaming mode doesn't support token_usage even for the plain OpenAI LLM:

https://langchain.readthedocs.io/en/latest/modules/llms/streaming_llm.html?highlight=token_usage
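The underlying reason (a hedged sketch, assuming the 2023-era API behavior): each streamed delta carries only a text fragment and no usage field, so the only client-side option is to count tokens over the accumulated chunks, e.g. with a tokenizer like tiktoken. The naive whitespace count below is a crude stand-in for a real tokenizer:

```python
# Sketch of why streaming reports zero token_usage: simulated SSE
# deltas, shaped like the OpenAI streaming API's chunks, never carry
# a "usage" key, so usage must be estimated client-side.
def stream_chunks():
    yield {"choices": [{"delta": {"content": "Hello"}}]}
    yield {"choices": [{"delta": {"content": " world"}}]}
    yield {"choices": [{"delta": {}}]}  # final chunk, still no usage

text = ""
for chunk in stream_chunks():
    text += chunk["choices"][0]["delta"].get("content", "")

# Crude whitespace count standing in for a real tokenizer.
approx_completion_tokens = len(text.split())
print(text, approx_completion_tokens)  # → Hello world 2
```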

stephenleo avatar Mar 19 '23 03:03 stephenleo

on_llm_end is already called in the BaseLanguageModel class; we just need to propagate things up there
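A minimal sketch of that refactor, with illustrative names rather than the real langchain internals: the base class fires `on_llm_end` once, so each subclass only has to return its `llm_output` (including `token_usage`) instead of invoking callbacks itself.

```python
# Hedged sketch of propagating callback firing into the base class.
# BaseLanguageModel, FakeChatModel, and Recorder are stand-ins, not
# the actual langchain classes.
class BaseLanguageModel:
    def __init__(self, callbacks):
        self.callbacks = callbacks

    def generate(self, prompt):
        result = self._generate(prompt)   # subclass hook
        for cb in self.callbacks:         # callbacks fired here, once,
            cb.on_llm_end(result)         # for every model subclass
        return result

class FakeChatModel(BaseLanguageModel):
    def _generate(self, prompt):
        # Subclass only returns data; it never touches callbacks.
        return {"text": "hi",
                "llm_output": {"token_usage": {"total_tokens": 7}}}

class Recorder:
    def __init__(self):
        self.seen = []
    def on_llm_end(self, result):
        self.seen.append(result["llm_output"]["token_usage"]["total_tokens"])

rec = Recorder()
model = FakeChatModel([rec])
model.generate("hello")
print(rec.seen)  # → [7]
```

With this shape, fixing token_usage for every chat model is a matter of each `_generate` returning the right `llm_output`, which is what the linked PR moves toward.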

hwchase17 avatar Mar 19 '23 17:03 hwchase17

https://github.com/hwchase17/langchain/pull/1785

hwchase17 avatar Mar 19 '23 17:03 hwchase17

Awesome! #1785 adds much of this PR's functionality. I'll debug the zero token_usage on the new master and re-submit a cleaner PR.

stephenleo avatar Mar 20 '23 02:03 stephenleo