Retrieve token usage from LLM requests
🚀 The feature
Retrieve the token usage from LLM requests: prompt tokens, completion tokens, and the total token count. Currently embedchain only returns the raw message from the LLM.
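For context, the underlying provider responses already carry this information. The sketch below calls the OpenAI Python SDK directly to show the `usage` fields that could be passed through; the `answer`/`usage` return shape here is a hypothetical illustration, not embedchain's actual API.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def chat_with_usage(prompt: str) -> dict:
    """Return the model's answer together with its token usage.

    The `usage` block (prompt_tokens, completion_tokens, total_tokens)
    is part of every OpenAI chat completion response; embedchain could
    surface it instead of returning only the raw message.
    """
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[{"role": "user", "content": prompt}],
    )
    return {
        "answer": response.choices[0].message.content,
        "usage": {
            "prompt_tokens": response.usage.prompt_tokens,
            "completion_tokens": response.usage.completion_tokens,
            "total_tokens": response.usage.total_tokens,
        },
    }

result = chat_with_usage("What is embedchain?")
print(result["usage"])
# e.g. {'prompt_tokens': 12, 'completion_tokens': 85, 'total_tokens': 97}
```

With these counts exposed, per-customer cost can be computed by multiplying token usage by the provider's per-token price.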
Motivation, pitch
Since many paid LLMs charge by token usage, it is hard to predict and calculate the cost of using embedchain with an LLM like OpenAI. We might need to calculate costs per customer, for example, and better control usage of the product.
Up
hey @jrobertogram thanks for highlighting this. We will get back to you here shortly.