
[BUG] Problematic token usage results in the trace

Open • Howie-Arup opened this issue 1 year ago • 3 comments

Hi, I found that the token counting seems problematic. When calculating the total tokens in Image 1 below, Phoenix appears to simply add together the token counts that show up in the trace detail (Image 2 below). However, the same process seems to be counted twice. For example, in Image 2, the OpenAI.Chat and ChatCompletion spans in the red rectangle report the same number of tokens, and when I clicked on them the input and output messages are identical, yet both numbers are added together (i.e., 580 + 580 in this case) when calculating the total tokens in Image 1.

Image 1: [screenshot: trace total token count] Image 2: [screenshot: trace detail with OpenAI.Chat and ChatCompletion spans]

Environment:

  • OS: Windows 11
  • Notebook Runtime: Jupyter
  • Browser: Chrome
  • Version:
arize-phoenix             4.36.0                   pypi_0    pypi
arize-phoenix-evals       0.15.1                   pypi_0    pypi
arize-phoenix-otel        0.5.0                    pypi_0    pypi

Howie-Arup • Dec 03 '24 09:12

Hey @Howie-Arup, thanks for the report. The double counting is definitely incorrect, but from your screenshot it looks like it's because you have both the OpenAI instrumentation and the LlamaIndex instrumentation enabled. At the moment these two instrumentations don't compose, since we wanted LlamaIndex to be fully instrumented (including its LLM calls), so enabling both counts the same calls twice. In this scenario, we recommend using just the LlamaIndex instrumentation.

We will explore better solutions for composition in the near future, but hopefully this is enough to unblock you for now. Thanks for using Phoenix!
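
A minimal sketch of that single-instrumentor setup, assuming the standard phoenix.otel and openinference import paths (the project name here is just the one from your snippet):

from phoenix.otel import register
from openinference.instrumentation.llama_index import LlamaIndexInstrumentor

# Register a Phoenix tracer provider for the project.
tracer_provider = register(project_name="my-llm-app")

# LlamaIndex instrumentation already captures the underlying LLM calls,
# so the OpenAI instrumentor is omitted to avoid double counting.
LlamaIndexInstrumentor().instrument(tracer_provider=tracer_provider)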

mikeldking • Dec 03 '24 13:12

@mikeldking Thanks for your reply! You're right, I had both the OpenAI and LlamaIndex instrumentations in the LlamaIndex workflow. I removed the OpenAI instrumentation and it now works fine.

But in the LlamaIndex workflow I have one step with a LlamaIndex ReAct agent and another step with a CrewAI agent. I used the code below for CrewAI, following the documentation, but it turned out there is double counting, as shown in the screenshot. I tried deleting the LiteLLMInstrumentor().instrument(tracer_provider=tracer_provider) line, but the result is the same. Is there something wrong? Thanks!

from phoenix.otel import register
from openinference.instrumentation.crewai import CrewAIInstrumentor
from openinference.instrumentation.litellm import LiteLLMInstrumentor
from openinference.instrumentation.llama_index import LlamaIndexInstrumentor
tracer_provider = register(project_name="my-llm-app")
LlamaIndexInstrumentor().instrument(tracer_provider=tracer_provider)
CrewAIInstrumentor().instrument(tracer_provider=tracer_provider)
LiteLLMInstrumentor().instrument(tracer_provider=tracer_provider)  # I am using CrewAI >= 0.63.0

[screenshot: CrewAI trace showing double-counted tokens]

Howie-Arup • Dec 04 '24 01:12

@mikeldking Sorry, I found that it's because I was running in a Jupyter notebook, so the LiteLLMInstrumentor may have still been active from an earlier run. After I re-opened the notebook and used the code below, there is no more double counting.

LlamaIndexInstrumentor().instrument(tracer_provider=tracer_provider)
CrewAIInstrumentor().instrument(tracer_provider=tracer_provider)
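
As an alternative to restarting the kernel, these instrumentors inherit OpenTelemetry's BaseInstrumentor, which also exposes an uninstrument() method. A minimal sketch, assuming LiteLLMInstrumentor was enabled earlier in the same session:

from openinference.instrumentation.litellm import LiteLLMInstrumentor

# BaseInstrumentor keeps one instance per class, so constructing it again
# reaches the instrumentation that was applied earlier in the session.
LiteLLMInstrumentor().uninstrument()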

But in this case I didn't actually follow the recommendation in the documentation, since I removed the LiteLLMInstrumentor, and the trace for CrewAI now looks like the screenshot below: it doesn't have the LLM call, so I can't see the input and output messages. Is that expected?

[screenshot: CrewAI trace without an LLM call span]

Howie-Arup • Dec 04 '24 01:12