crewAI
Tokens count
How do I determine token usage?
You could use callbacks for now, but I want to add a better way to report on it. Adding this as an accepted feature.
@joaomdmoura Any update on this feature? Need this.
btw, Awesome job!
```python
crew = Crew(
    agents=[agent1, agent2],
    tasks=[task1, task2],
    ...
)
print(crew.usage_metrics)
```
I believe I found a bug in the `Agent` class's `set_agent_executor` method, where the callbacks that make this token count observation possible are added. I found that the `TokenCalcHandler` was being called 3 separate times for each interaction with the LLM when only one call was expected, causing the usage to be over-reported.
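The over-counting can be sketched in isolation. This is a minimal stand-in, not crewAI's actual classes: `TokenCounter` and `run_llm` are hypothetical, but they show why registering the same handler multiple times inflates the reported totals.

```python
class TokenCounter:
    """Stand-in for TokenCalcHandler: accumulates tokens per LLM call."""
    def __init__(self):
        self.total_tokens = 0

    def on_llm_end(self, tokens_used):
        self.total_tokens += tokens_used

def run_llm(callbacks, tokens_used):
    # Every registered callback is notified once per LLM call.
    for cb in callbacks:
        cb.on_llm_end(tokens_used)

counter = TokenCounter()
# The same handler registered three times, as in the buggy behaviour:
callbacks = [counter, counter, counter]
run_llm(callbacks, tokens_used=100)
print(counter.total_tokens)  # 300, not 100: usage over-reported 3x
```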
By ensuring that only a single instance of `TokenCalcHandler` can be appended to the LLM's callbacks, I was able to verify that only a single usage callback was triggered. While my understanding of this library's internals may be limited, I believe this fix may be useful. ~~If someone can verify the hypothesis or current behaviour, I'd be happy to open a PR.~~
I've opened a PR for this, but would like some help verifying this hypothesis please. Thanks everyone!
Here's the modified method:
```python
@model_validator(mode="after")
def set_agent_executor(self) -> "Agent":
    """Ensure the agent executor is set."""
    if hasattr(self.llm, "model_name"):
        token_handler = TokenCalcHandler(self.llm.model_name, self._token_process)

        # Ensure self.llm.callbacks is a list
        if not isinstance(self.llm.callbacks, list):
            self.llm.callbacks = []

        # Only append if no TokenCalcHandler instance is already registered
        if not any(isinstance(handler, TokenCalcHandler) for handler in self.llm.callbacks):
            self.llm.callbacks.append(token_handler)

    if not self.agent_executor:
        if not self.cache_handler:
            self.cache_handler = CacheHandler()
        self.set_cache_handler(self.cache_handler)
    return self
```
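The guard can also be exercised on its own. This is a hypothetical stand-alone version (`register_token_handler` and the bare `TokenCalcHandler` class are illustrative, not crewAI's code) showing that the check makes registration idempotent even if the validator runs more than once:

```python
class TokenCalcHandler:  # stand-in for crewAI's handler
    pass

def register_token_handler(callbacks):
    # The validator may run repeatedly; this check keeps it idempotent.
    if not any(isinstance(h, TokenCalcHandler) for h in callbacks):
        callbacks.append(TokenCalcHandler())

callbacks = []
register_token_handler(callbacks)
register_token_handler(callbacks)  # second call is a no-op
print(len(callbacks))  # 1
```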