crewAI
flag to count tokens in crewai runs
Simple flag to count tokens in crewai
LangChain already supports this: https://python.langchain.com/docs/modules/model_io/llms/token_usage_tracking
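For reference, the linked LangChain helper is typically used like this (a minimal sketch; in recent LangChain versions `get_openai_callback` is exported from `langchain_community.callbacks`, while older releases exposed it from `langchain.callbacks`):

```python
from langchain_community.callbacks import get_openai_callback
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-3.5-turbo")

# Every OpenAI call made inside the context manager is tallied on cb
with get_openai_callback() as cb:
    llm.invoke("Tell me a joke")
    print(cb.total_tokens, cb.prompt_tokens, cb.completion_tokens, cb.total_cost)
```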
Love it! Adding it as an accepted feature.
@joaomdmoura @Biancamazzi Can I work on this?
@prabha-git yes, please, feel free to open a PR for that! thanks
@joaomdmoura @Biancamazzi - Thanks, I will be working on this. I am new to CrewAI, so if you have any suggestions or recommendations, let me know.
I was just looking for this! @prabha-git please keep us posted
print("\n\n\n\n\n\n\n Start::")
Starting the task execution process
with get_openai_callback() as cb: result = crew.kickoff() print(result) print(f"Total Tokens: {cb.total_tokens}") print(f"Prompt Tokens: {cb.prompt_tokens}") print(f"Completion Tokens: {cb.completion_tokens}") print(f"Total Cost (USD): ${cb.total_cost}") print("\n\n")
This works fine
@bishwarupdas where in the code does this go so I can use it? Thanks.
print("\n\n\n\n\n\n\n Start::")
Starting the task execution process
with get_openai_callback() as cb: result = crew.kickoff() print(result) print(f"Total Tokens: {cb.total_tokens}") print(f"Prompt Tokens: {cb.prompt_tokens}") print(f"Completion Tokens: {cb.completion_tokens}") print(f"Total Cost (USD): ${cb.total_cost}") print("\n\n")
This works fine
@pebeid I am not sure where in the codebase it lives, but you need to add these lines around the `result = crew.kickoff()` call.
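For anyone wondering where that helper comes from (it is asked again further down): `get_openai_callback` ships with LangChain, not crewAI. A minimal self-contained sketch, assuming a `crew` is already assembled:

```python
# Recent LangChain versions export it from langchain_community;
# older releases used `from langchain.callbacks import get_openai_callback`.
from langchain_community.callbacks import get_openai_callback

with get_openai_callback() as cb:
    result = crew.kickoff()

print(result)
print(f"Total Tokens: {cb.total_tokens}")
print(f"Total Cost (USD): ${cb.total_cost}")
```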
@bishwarupdas I tried it and it worked. Thanks so much
Didn't work for me. I am using the Azure OpenAI service with a GPT-4 model:

```python
def run(self):
    agents = QueryAgents(self.ticket_detail_tools)
    tasks = QueryTasks(self.ticket_detail_tools)

    azure_llm = AzureChatOpenAI(
        azure_endpoint=os.getenv("AZURE_ENDPOINT"),
        api_version=os.getenv("AZURE_API_VERSION"),
        api_key=os.getenv("AZURE_API_KEY"),
        model=os.getenv("AZURE_API_MODEL"),
    )

    context_enrichment_agent = agents.context_enrichment_agent(azure_llm)
    subquery_processor = agents.subquery_processor(azure_llm)
    email_writer = agents.email_writer(azure_llm)

    context_enrichment_task = tasks.context_enrichment_task(
        context_enrichment_agent
    )
    response_generation_task = tasks.response_generation_task(
        subquery_processor, [context_enrichment_task]
    )
    email_generation_task = tasks.email_generation_task(
        email_writer, [response_generation_task]
    )

    # Assemble a crew
    crew = Crew(
        agents=[context_enrichment_agent, subquery_processor, email_writer],
        tasks=[
            context_enrichment_task,
            response_generation_task,
            email_generation_task,
        ],
        verbose=True,
        # process=Process.hierarchical,
        # manager_llm=ChatOpenAI(model="gpt-4")
    )

    # Execute tasks
    with get_openai_callback() as cb:
        result = crew.kickoff()
        print(result)
        print(f"Total Tokens: {cb.total_tokens}")
        print(f"Prompt Tokens: {cb.prompt_tokens}")
        print(f"Completion Tokens: {cb.completion_tokens}")
        print(f"Total Cost (USD): ${cb.total_cost}")
        print("\n\n")

    return result
```
I'm getting this:

```
Total Tokens: 0
Prompt Tokens: 0
Completion Tokens: 0
Total Cost (USD): $0.0
```
It didn't work for me either. Any other workarounds one could try?
This problem might be related to this issue: https://github.com/langchain-ai/langchain/issues/16798
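Until that is fixed upstream, one possible workaround (my own sketch, not an official crewAI or LangChain API) is to skip `get_openai_callback` and accumulate token usage in a custom callback handler attached to the LLM, reading whatever usage the provider reports on each `on_llm_end`:

```python
import os

from langchain_core.callbacks import BaseCallbackHandler
from langchain_core.outputs import LLMResult
from langchain_openai import AzureChatOpenAI

class TokenUsageHandler(BaseCallbackHandler):
    """Accumulates token usage across every LLM call it observes."""

    def __init__(self):
        self.prompt_tokens = 0
        self.completion_tokens = 0

    def on_llm_end(self, response: LLMResult, **kwargs) -> None:
        # OpenAI-style models report usage in llm_output["token_usage"];
        # providers that omit it will simply add 0 here.
        usage = (response.llm_output or {}).get("token_usage", {})
        self.prompt_tokens += usage.get("prompt_tokens", 0)
        self.completion_tokens += usage.get("completion_tokens", 0)

token_handler = TokenUsageHandler()
azure_llm = AzureChatOpenAI(
    azure_endpoint=os.getenv("AZURE_ENDPOINT"),
    api_version=os.getenv("AZURE_API_VERSION"),
    api_key=os.getenv("AZURE_API_KEY"),
    model=os.getenv("AZURE_API_MODEL"),
    callbacks=[token_handler],
)

# ... build your agents, tasks, and crew with azure_llm as before, then:
# crew.kickoff()
# print(token_handler.prompt_tokens, token_handler.completion_tokens)
```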
Any update here? I'm also hitting this problem.
I found a solution, just use the API below:

```python
crew = Crew(agents=[agent1, agent2], tasks=[task1, task2], ...)

print(crew.usage_metrics)
```
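For completeness, that approach end to end (if I understand correctly, `usage_metrics` is filled in during `kickoff`, so no LangChain callback is needed; the metric keys below match the dicts reported later in this thread):

```python
result = crew.kickoff()
print(result)
print(crew.usage_metrics)
# e.g. {'total_tokens': ..., 'prompt_tokens': ..., 'completion_tokens': ..., 'successful_requests': ...}
```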
But then, is there a way to check token/cost usage per task or per agent?
@bishwarupdas where is `get_openai_callback()` defined, and where do I import it from? Thanks.
@mahimairaja Yes, in `utilities/token_counter_callback.py`; you can see it being used as a callback for every Agent.
I'm working on possibly adding an extra parameter to kickoff that will output the entire usage metrics at the end of the run.
kickoff usage:

```python
crew.kickoff(output_token_usage=True)  # like `inputs`, this is totally optional
```
OUTPUT: (screenshot in the original comment)
wdyt of this solution? @joaomdmoura @gvieira
Love it, merged!
> I found the solution, just use the api below!
> `print(crew.usage_metrics)`
this shows 0 tokens for me
Same here, it reports `{'total_tokens': 0, 'prompt_tokens': 0, 'completion_tokens': 0, 'successful_requests': 0}`. I am using an Anthropic Claude LLM.
@Syed-Sherjeel what version are you using? I'll try with the current main; we are about to cut a new version and it should have a bunch of fixes in it.
Hey @joaomdmoura, thank you for the reply. Here are my package versions:
```
crewai==0.41.1
crewai-tools==0.4.26
langchain==0.2.11
langchain-anthropic==0.1.20
langchain-cohere==0.1.9
langchain-community==0.2.10
langchain-core==0.2.23
langchain-experimental==0.0.63
langchain-openai==0.1.17
langchain-text-splitters==0.2.2
```