
[Bug]: Unable to view token counts from Bedrock LLM instance and phoenix Observability

Open chaitanya-nit opened this issue 10 months ago • 5 comments

Bug Description

I am unable to see token counts for LLM models instantiated using the Bedrock class. I am using Phoenix as the observability tool.

Version

0.10.20

Steps to Reproduce

You can use the starter code below to reproduce this:

import phoenix as px
from llama_index.core import set_global_handler
from llama_index.llms.bedrock import Bedrock
from llama_index.core import Settings

session = px.launch_app()
set_global_handler("arize_phoenix")

profile_name = "Your aws profile name"
llm = Bedrock(
    model="amazon.titan-text-express-v1", profile_name=profile_name
)
Settings.llm = llm
llm.complete("What is the capital of France?")

Relevant Logs/Tracebacks

No response

chaitanya-nit avatar Apr 23 '24 06:04 chaitanya-nit

To address the missing token counts for LLM models instantiated using the Bedrock class with Phoenix as the observability tool, integrate the TokenCountingHandler into your setup. This handler tracks and reports token usage, which appears to be missing from your current configuration. Here's how you can modify your code:

import phoenix as px
from llama_index.core import set_global_handler, CallbackManager
from llama_index.core.callbacks import TokenCountingHandler
from llama_index.llms.bedrock import Bedrock
from llama_index.core import Settings
import tiktoken

# Set up a TokenCountingHandler with a tiktoken-based tokenizer.
# Note: tiktoken has no Tokenizer class; pass an encoding's encode method.
# cl100k_base only approximates the Titan tokenizer, but gives usable counts.
token_counter = TokenCountingHandler(
    tokenizer=tiktoken.get_encoding("cl100k_base").encode
)

# Create a CallbackManager with the TokenCountingHandler
callback_manager = CallbackManager([token_counter])

# Set the global handler
set_global_handler("arize_phoenix")

session = px.launch_app()

profile_name = "Your aws profile name"
llm = Bedrock(
    model="amazon.titan-text-express-v1", profile_name=profile_name
)
Settings.llm = llm
Settings.callback_manager = callback_manager

llm.complete("What is the capital of France?")

# To view the token counts
print(f"Total Embedding Token Count: {token_counter.total_embedding_token_count}")
print(f"Total LLM Prompt Token Count: {token_counter.prompt_llm_token_count}")
print(f"Total LLM Completion Token Count: {token_counter.completion_llm_token_count}")
print(f"Total LLM Token Count: {token_counter.total_llm_token_count}")

This setup ensures that the TokenCountingHandler is active during LLM operations, allowing you to access various token counts after running the complete method. Make sure your Phoenix Observability tool is configured to display these token counts.
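
To verify whether token counts are actually reaching Phoenix (as opposed to the local handler above), you can pull the recorded spans back out of the running session. A minimal sketch, assuming the arize-phoenix Client API and that at least one LLM call has been traced:

import phoenix as px

# Fetch all spans recorded by the local Phoenix session as a DataFrame.
spans = px.Client().get_spans_dataframe()

# Token counts, when captured, show up as llm.token_count.* span attributes.
token_columns = [c for c in spans.columns if "token_count" in c]
print(spans[token_columns].dropna(how="all"))

If those columns are empty for Bedrock spans but populated for other providers, that narrows the problem down to what the Bedrock LLM class reports in its callback payloads.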

If issues persist, consider updating to the latest version of LlamaIndex, as newer versions may have enhancements or bug fixes related to token counting. For further details, refer to the Token Counting Migration Guide and the TokenCountingHandler source code in the LlamaIndex repository.


dosubot[bot] avatar Apr 23 '24 06:04 dosubot[bot]

@dosu I am aware that the TokenCountingHandler is sufficient for estimating token counts, but this issue is about the arize_phoenix observability integration not working with llama-index's Bedrock models. Hope you have the context now.

chaitanya-nit avatar Apr 24 '24 07:04 chaitanya-nit

To resolve the issue with arize_phoenix observability integration not working with Bedrock models in llama-index, ensure the following steps are correctly implemented:

  1. Ensure Arize Phoenix Installation: Confirm that Arize Phoenix is correctly installed by running pip install -q arize-phoenix. This step is crucial for the integration to work properly.

  2. Integration of TokenCountingHandler: Make sure that the TokenCountingHandler is correctly integrated into your setup. This handler is essential for tracking token counts, which are necessary for the observability tool to function as expected.

  3. Configuration of Callback Handler: Verify that the arize_phoenix_callback_handler is correctly configured alongside the TokenCountingHandler, so that token count data is passed through to the OpenInferenceTraceCallbackHandler (a sketch of this wiring follows the list).

  4. Debugging and Logs: If problems persist, review the logs for any errors related to Arize Phoenix integration or the instantiation of handlers. This can help identify any misconfigurations or issues in the integration process.

  5. Consult Documentation and Support: If the issue remains unresolved, consult the documentation for both Llama-Index and Arize Phoenix. If necessary, reach out to their support teams for further assistance.

Ensure your environment and code are correctly set up for the integration, especially focusing on the correct installation of Arize Phoenix and the proper configuration of the TokenCountingHandler. This should address the issue with observability integration not working as expected with Bedrock models.
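
As a concrete illustration of points 2 and 3, the two handlers can also be wired into a single CallbackManager explicitly rather than relying on set_global_handler. A sketch, assuming the llama-index-callbacks-arize-phoenix package is installed and exposes arize_phoenix_callback_handler:

from llama_index.core import Settings
from llama_index.core.callbacks import CallbackManager, TokenCountingHandler
from llama_index.callbacks.arize_phoenix import arize_phoenix_callback_handler

# The Phoenix handler (an OpenInferenceTraceCallbackHandler under the hood)
# and the local token counter share one manager, so both receive LLM events.
phoenix_handler = arize_phoenix_callback_handler()
token_counter = TokenCountingHandler()
Settings.callback_manager = CallbackManager([phoenix_handler, token_counter])

With both attached, a discrepancy between the token counter's totals and what Phoenix displays points at the Bedrock integration rather than at Phoenix itself.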


dosubot[bot] avatar Apr 24 '24 07:04 dosubot[bot]

This seems like something to raise on the https://github.com/Arize-ai/phoenix project instead.

brycecf avatar May 08 '24 12:05 brycecf

If the integration issue persists specifically with Arize Phoenix and Bedrock models from llama-index, it might indeed be beneficial to raise this concern directly on the Arize Phoenix project page. There, you can provide details of the issue, including the steps you've already taken and any relevant logs or error messages. This approach allows the developers who are more familiar with the internals of Arize Phoenix to offer targeted assistance and potentially identify if there's a bug or a missing feature that needs to be addressed. Additionally, checking the project's existing issues and discussions might reveal if others have encountered similar problems and if any solutions or workarounds have been proposed.


dosubot[bot] avatar May 08 '24 12:05 dosubot[bot]

I am getting the following error: ImportError: cannot import name 'CallbackManager' from 'llama_index.core'

(llama-index : Version: 0.11.1)

bhavesh-kf avatar Aug 25 '24 14:08 bhavesh-kf
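
Regarding the ImportError above: in llama-index 0.11.x, CallbackManager is no longer importable from the llama_index.core package root. A likely fix, assuming the 0.11 package layout, is to import it from the callbacks subpackage instead:

# Works on 0.11.x (and also on 0.10.x), unlike the top-level import.
from llama_index.core.callbacks import CallbackManager, TokenCountingHandler

Switching to this import path makes the earlier snippet version-agnostic.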