langchain
Feat: Adding token usage for ChatOpenAI. Resolves #1519, resolves #1429
- Added the `on_llm_end` callback into ChatOpenAI's `__call__`
- Fixed multiple linting errors arising from the above change
- Added tests to `chat_models/test_openai.py`
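The mechanism described above can be sketched as follows. This is a minimal, self-contained illustration of the callback pattern, not LangChain's actual classes: `CallbackHandler` and `ChatModel` are stand-ins, and the hard-coded response dict mimics the shape of OpenAI's usage metadata.

```python
class CallbackHandler:
    """Collects token usage reported at the end of an LLM call."""
    def __init__(self):
        self.token_usage = {}

    def on_llm_end(self, llm_output):
        # llm_output mirrors OpenAI's response metadata, e.g.
        # {"token_usage": {"prompt_tokens": 5, "completion_tokens": 7,
        #                  "total_tokens": 12}}
        self.token_usage = llm_output.get("token_usage", {})


class ChatModel:
    """Stand-in for ChatOpenAI: __call__ fires on_llm_end with usage info."""
    def __init__(self, handler):
        self.handler = handler

    def __call__(self, prompt):
        # Canned response; a real call would hit the OpenAI endpoint.
        response = {
            "text": "hello",
            "token_usage": {"prompt_tokens": 5, "completion_tokens": 7,
                            "total_tokens": 12},
        }
        # The fix in this PR: propagate usage to the callback on completion.
        self.handler.on_llm_end(response)
        return response["text"]


handler = CallbackHandler()
chat = ChatModel(handler)
chat("hi")
print(handler.token_usage["total_tokens"])  # 12
```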
How can total_tokens be made to work in streaming mode as well? For example:
chat = ChatOpenAI(streaming=True, callback_manager=CallbackManager([StreamingStdOutCallbackHandler()]), verbose=True, temperature=0.7)
I tested the commit and found it only works in non-streaming mode.
That's right, streaming mode doesn't support token_usage even for the OpenAI LLM:
https://langchain.readthedocs.io/en/latest/modules/llms/streaming_llm.html?highlight=token_usage
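Since the streamed API response omits usage metadata, one client-side workaround is to accumulate the streamed chunks and count tokens yourself at the end. The sketch below is illustrative only: `count_tokens` is a whitespace stand-in (a real implementation would use a proper tokenizer such as tiktoken), and the handler names merely mirror the callback style seen above.

```python
def count_tokens(text):
    # Stand-in tokenizer; replace with a real one (e.g. tiktoken) in practice.
    return len(text.split())


class StreamingUsageHandler:
    """Accumulates streamed chunks, then counts completion tokens locally."""
    def __init__(self):
        self.buffer = []
        self.completion_tokens = 0

    def on_llm_new_token(self, token):
        # Called once per streamed chunk.
        self.buffer.append(token)

    def on_llm_end(self):
        # Streaming responses carry no usage field, so count client-side.
        self.completion_tokens = count_tokens("".join(self.buffer))


handler = StreamingUsageHandler()
for chunk in ["Hello", " there", ", how", " are", " you?"]:
    handler.on_llm_new_token(chunk)
handler.on_llm_end()
print(handler.completion_tokens)  # 5
```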
on_llm_end is already called on the BaseLanguageModel class; we just need to propagate things up there.
https://github.com/hwchase17/langchain/pull/1785
Awesome! #1785 adds much of the functionality of this PR. I'll debug the zero token_usage on the new master and re-submit a cleaner PR.