Capability to debug model input/output
I haven't found an easy way within the library to log messages sent to/from the model. When debugging prompts and trying to ensure the model's response properly serializes to a Pydantic model, having this level of logging available would be very helpful.
Hi @weaversam8, thanks for the issue. Here are a few things that might be useful to you.
- Prompt-functions expose a `format` method that takes the same arguments as the function itself but returns the string prompt that will be sent to the model.

  ```python
  from magentic import prompt

  @prompt("Say hello {n} times")
  def say_hello(n: int) -> str: ...

  say_hello.format(2)
  # 'Say hello 2 times'
  ```
- When the model output fails to be parsed by magentic, the exception traceback contains the pydantic validation error, which shows which part of the model output caused the issue. An example showing the relevant part of the traceback is in https://github.com/jackmpcollins/magentic/issues/61#issuecomment-1809598854
- openai uses the Python logger so you can enable debug logs. These show the request that gets sent to OpenAI.

  ```python
  import logging

  from magentic import prompt

  logging.basicConfig(level=logging.DEBUG)

  def plus(a: int, b: int) -> int:
      return a + b

  @prompt(
      "Say hello {n} times",
      functions=[plus],
  )
  def say_hello(n: int) -> str: ...

  say_hello(2)
  # ...
  # DEBUG:openai._base_client:Request options: {'method': 'post', 'url': '/chat/completions', 'files': None, 'json_data': {'messages': [{'role': 'user', 'content': 'Say hello 2 times'}], 'model': 'gpt-3.5-turbo', 'functions': [{'name': 'plus', 'parameters': {'properties': {'a': {'title': 'A', 'type': 'integer'}, 'b': {'title': 'B', 'type': 'integer'}}, 'required': ['a', 'b'], 'type': 'object'}}], 'max_tokens': None, 'stream': True, 'temperature': None}}
  # ...
  ```
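For illustration, this is the kind of pydantic validation error you would find in that traceback when the model output fails to parse. This is a standalone sketch using pydantic directly; the `Superhero` model and the invalid value are made up for the example, not taken from the linked issue:

```python
from pydantic import BaseModel, ValidationError

class Superhero(BaseModel):
    name: str
    age: int

# Simulate model output that fails to parse: "age" is not an integer
try:
    Superhero.model_validate({"name": "Zorak", "age": "unknown"})
except ValidationError as exc:
    print(exc)
    # The ValidationError names the offending field ("age") and the
    # invalid input value, i.e. the part of the model output at fault.
```
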
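Enabling `DEBUG` globally as above can be noisy, since every library that logs will emit records too. A stdlib-only sketch that scopes debug output to just the openai logger (assuming the logger name `openai`, which matches the `openai._base_client` records shown above):

```python
import logging

# Keep the root logger quiet apart from warnings...
logging.basicConfig(level=logging.WARNING)

# ...but turn on full debug output for openai and its child loggers,
# e.g. openai._base_client, which logs the request options.
logging.getLogger("openai").setLevel(logging.DEBUG)
```
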
Please let me know if there's anything else you think would be helpful in addition to these. It does seem like it would be worthwhile for magentic to have its own debug logging.
Logfire is all you need 🔥
@weaversam8 @rawwerks
https://github.com/jackmpcollins/magentic/releases/tag/v0.28.0 adds instrumentation for LogFire 🪵🔥 / OpenTelemetry which allows you to easily see all requests to the LLM provider APIs.
Just need to install logfire and add the code to configure it:

```shell
pip install logfire
```

```python
import logfire

logfire.configure(send_to_logfire=False)  # Or True to use the Logfire service
logfire.instrument_openai()  # optional, to trace OpenAI API calls
# logfire.instrument_anthropic()  # optional, to trace Anthropic API calls
```
See the new docs page on Logging and Tracing (https://magentic.dev/logging-and-tracing/) and the Logfire docs on their OpenAI integration (https://docs.pydantic.dev/logfire/integrations/openai/).
Please let me know how this goes for you!