
flag to count tokens in crewai runs

Open jniederriter opened this issue 1 year ago • 14 comments

A simple flag to count tokens used in crewAI runs.

LangChain already supports token usage tracking: https://python.langchain.com/docs/modules/model_io/llms/token_usage_tracking

jniederriter avatar Jan 20 '24 15:01 jniederriter

Love it! Adding it as an accepted feature.

joaomdmoura avatar Jan 21 '24 19:01 joaomdmoura

@joaomdmoura @Biancamazzi Can I work on this?

prabha-git avatar Jan 23 '24 04:01 prabha-git

@prabha-git yes, please, feel free to open a PR for that! thanks

Biancamazzi avatar Jan 23 '24 04:01 Biancamazzi

@joaomdmoura @Biancamazzi - Thanks, I will be working on this. I am new to CrewAI, so if you have any suggestions or recommendations, let me know.

prabha-git avatar Jan 23 '24 14:01 prabha-git

I was just looking for this! @prabha-git please keep us posted

dawid-ai avatar Jan 25 '24 15:01 dawid-ai

```python
print("\n\n\n\n\n\n\n Start::")

# Starting the task execution process
with get_openai_callback() as cb:
    result = crew.kickoff()
    print(result)
    print(f"Total Tokens: {cb.total_tokens}")
    print(f"Prompt Tokens: {cb.prompt_tokens}")
    print(f"Completion Tokens: {cb.completion_tokens}")
    print(f"Total Cost (USD): ${cb.total_cost}")
    print("\n\n")
```

This works fine.

bishwarupdas avatar Feb 14 '24 11:02 bishwarupdas

@bishwarupdas where in the code do I put this so I can use it? Thanks.


pebeid avatar Feb 21 '24 12:02 pebeid

@pebeid I am not sure where it lives in your code, but you need to add these lines around the point where `result = crew.kickoff()` is called.

bishwarupdas avatar Feb 21 '24 12:02 bishwarupdas
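For readers unfamiliar with the pattern being used above: conceptually, LangChain's `get_openai_callback` is a context manager that accumulates token counts across the LLM calls made inside the `with` block. The following is a stdlib-only sketch of that idea — all names here (`TokenTally`, `token_tracker`, `record`) are hypothetical illustrations, not the real LangChain implementation:

```python
from contextlib import contextmanager
from dataclasses import dataclass


@dataclass
class TokenTally:
    prompt_tokens: int = 0
    completion_tokens: int = 0

    @property
    def total_tokens(self) -> int:
        return self.prompt_tokens + self.completion_tokens

    def record(self, prompt: int, completion: int) -> None:
        # Called once per LLM response; a real callback would read these
        # numbers from the provider's usage payload.
        self.prompt_tokens += prompt
        self.completion_tokens += completion


@contextmanager
def token_tracker():
    tally = TokenTally()
    # Every call made inside the with-block records into this tally.
    yield tally


# Usage: two simulated LLM calls
with token_tracker() as cb:
    cb.record(prompt=120, completion=30)
    cb.record(prompt=80, completion=20)

print(f"Total Tokens: {cb.total_tokens}")  # Total Tokens: 250
```

The real callback works the same way, except the `record` step happens automatically inside LangChain's LLM wrappers, which is why it only counts calls routed through models it knows how to instrument.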

@bishwarupdas I tried it and it worked. Thanks so much

pebeid avatar Feb 21 '24 23:02 pebeid

This didn't work for me. I am using the Azure OpenAI service with a GPT-4 model:

```python
def run(self):
    agents = QueryAgents(self.ticket_detail_tools)
    tasks = QueryTasks(self.ticket_detail_tools)

    azure_llm = AzureChatOpenAI(
        azure_endpoint=os.getenv("AZURE_ENDPOINT"),
        api_version=os.getenv("AZURE_API_VERSION"),
        api_key=os.getenv("AZURE_API_KEY"),
        model=os.getenv("AZURE_API_MODEL"),
    )

    context_enrichment_agent = agents.context_enrichment_agent(azure_llm)
    subquery_processor = agents.subquery_processor(azure_llm)
    email_writer = agents.email_writer(azure_llm)

    context_enrichment_task = tasks.context_enrichment_task(
        context_enrichment_agent
    )
    response_generation_task = tasks.response_generation_task(
        subquery_processor, [context_enrichment_task]
    )
    email_generation_task = tasks.email_generation_task(
        email_writer, [response_generation_task]
    )

    # Assemble a crew
    crew = Crew(
        agents=[context_enrichment_agent, subquery_processor, email_writer],
        tasks=[
            context_enrichment_task,
            response_generation_task,
            email_generation_task,
        ],
        verbose=True,
        # process=Process.hierarchical,
        # manager_llm=ChatOpenAI(model="gpt-4")
    )

    # Execute tasks
    with get_openai_callback() as cb:
        result = crew.kickoff()
        print(result)
        print(f"Total Tokens: {cb.total_tokens}")
        print(f"Prompt Tokens: {cb.prompt_tokens}")
        print(f"Completion Tokens: {cb.completion_tokens}")
        print(f"Total Cost (USD): ${cb.total_cost}")
        print("\n\n")
        return result
```

I'm getting:

```
Total Tokens: 0
Prompt Tokens: 0
Completion Tokens: 0
Total Cost (USD): $0.0
```

abhi050400 avatar Apr 04 '24 07:04 abhi050400

> Didn't work for me, I am using the Azure OpenAI service with a GPT-4 model […] getting Total Tokens: 0, Prompt Tokens: 0, Completion Tokens: 0, Total Cost (USD): $0.0

It didn't work for me either, any other workarounds one could try?

gabrielfior avatar Apr 04 '24 17:04 gabrielfior

This problem might be related to this issue: https://github.com/langchain-ai/langchain/issues/16798

yuripourre avatar Apr 06 '24 17:04 yuripourre

Any update here? I'm hitting the same problem.

I found a solution: just use the API below!

```python
crew = Crew(
    agents=[agent1, agent2],
    tasks=[task1, task2],
    ...
)

# after running the crew
print(crew.usage_metrics)
```

yycsu avatar Apr 15 '24 13:04 yycsu

But then, is there a way to get a per-task or per-agent token/cost breakdown?

mahimairaja avatar May 09 '24 16:05 mahimairaja

@bishwarupdas where is `get_openai_callback()` defined, and where do I import it from? Thanks.

khadimhussain0 avatar May 22 '24 18:05 khadimhussain0

@mahimairaja Yes, in `utilities/token_counter_callback.py` you can see it being used as a callback for every Agent.

I'm working on possibly having an additional parameter in kickoff that will output the entire usage metrics at the end of the output.

lorenzejay avatar Jun 10 '24 23:06 lorenzejay
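To make the per-agent idea concrete: if each callback tagged its usage numbers with the agent that produced them, a breakdown would just be an aggregation over those records. The sketch below is purely hypothetical (it is not a crewAI API); the record shape and function name are made up for illustration:

```python
from collections import defaultdict


def per_agent_usage(records):
    """Aggregate (agent_name, usage_dict) pairs into per-agent totals."""
    totals = defaultdict(lambda: {"prompt_tokens": 0, "completion_tokens": 0})
    for agent, usage in records:
        for key in ("prompt_tokens", "completion_tokens"):
            totals[agent][key] += usage.get(key, 0)
    return dict(totals)


# Usage with made-up numbers, one entry per LLM call
records = [
    ("researcher", {"prompt_tokens": 300, "completion_tokens": 80}),
    ("writer", {"prompt_tokens": 150, "completion_tokens": 200}),
    ("researcher", {"prompt_tokens": 120, "completion_tokens": 40}),
]
print(per_agent_usage(records))
# {'researcher': {'prompt_tokens': 420, 'completion_tokens': 120},
#  'writer': {'prompt_tokens': 150, 'completion_tokens': 200}}
```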

kickoff usage:

```python
crew.kickoff(output_token_usage=True)  # like inputs, this is totally optional
```

OUTPUT: [screenshot]

wdyt of this solution? @joaomdmoura @gvieira

lorenzejay avatar Jun 11 '24 19:06 lorenzejay

Love it, merged!

joaomdmoura avatar Jun 12 '24 17:06 joaomdmoura

> I found the solution, just use the API below! `print(crew.usage_metrics)`

This shows 0 tokens for me.

amansingh9097 avatar Jun 21 '24 12:06 amansingh9097

> `print(crew.usage_metrics)` […] this shows 0 tokens for me

Same here:

```
{'total_tokens': 0, 'prompt_tokens': 0, 'completion_tokens': 0, 'successful_requests': 0}
```

I am using the Anthropic Claude LLM.

Syed-Sherjeel avatar Jul 29 '24 07:07 Syed-Sherjeel
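Until the underlying counting is fixed, one way to avoid trusting silently-zero numbers is to check the metrics dict before using it. This is a hypothetical helper, not part of crewAI or LangChain:

```python
def check_usage(metrics: dict) -> bool:
    """Return True if the metrics look real; warn if everything is zero."""
    if all(v == 0 for v in metrics.values()):
        print("Warning: all usage metrics are zero; the token-counting "
              "callback may not support this LLM provider.")
        return False
    return True


# Usage: the all-zero dict reported above triggers the warning
check_usage({"total_tokens": 0, "prompt_tokens": 0,
             "completion_tokens": 0, "successful_requests": 0})
```

A guard like this at least turns the failure mode (callback not wired to the provider, as seen with Azure OpenAI and Anthropic above) into an explicit signal rather than a plausible-looking $0.00 cost.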

@Syed-Sherjeel what version are you using? I'll try with the current main; we are about to cut a new version and it should have a bunch of fixes in it.

joaomdmoura avatar Jul 29 '24 08:07 joaomdmoura

> what version are you using, I'll try with the current main

Hey @joaomdmoura, thank you for the reply. Here are my package versions:

```
crewai==0.41.1
crewai-tools==0.4.26
langchain==0.2.11
langchain-anthropic==0.1.20
langchain-cohere==0.1.9
langchain-community==0.2.10
langchain-core==0.2.23
langchain-experimental==0.0.63
langchain-openai==0.1.17
langchain-text-splitters==0.2.2
```

Syed-Sherjeel avatar Jul 29 '24 08:07 Syed-Sherjeel