
[PROPOSAL] Allow the client to pass a callback that receives arguments Instructor sends to LLM providers

Open voberoi opened this issue 1 year ago • 7 comments

Instructor doesn't expose an easy way to see or use the raw arguments it finally sends over to a language model (see #907, #767, and https://github.com/jxnl/instructor/issues/888#issuecomment-2262366959).

This PR is a sketch of an idea that solves this problem in a way that:

  • Allows instructor clients to receive these arguments and do whatever they want with them, every time instructor calls out to a language model provider (including on every retry).
  • Doesn't require a first-class integration between a third-party logging service and the underlying client.

It does this by allowing users to pass in a callback, like so:

import logging
import os
from typing import Any

import instructor
from anthropic import Anthropic
from pydantic import BaseModel

logger = logging.getLogger(__name__)


class UserInfo(BaseModel):
    name: str
    age: int


def log_llm_args(args: list[Any], kwargs: dict[str, Any]):
    logger.info(f"LLM args: {args}")
    logger.info(f"LLM kwargs: {kwargs}")


def main():
    client = instructor.from_anthropic(
        Anthropic(api_key=os.getenv("ANTHROPIC_API_KEY"))
    )

    user_info = client.chat.completions.create(
        model="claude-3-haiku-20240307",
        response_model=UserInfo,
        messages=[
            {
                "role": "system",
                "content": "You are an expert at extracting information about users from sentences.",
            },
            {"role": "user", "content": "John Doe is 30 years old."},
        ],
        max_tokens=4096,
        max_retries=3,
        provider_args_callback=log_llm_args,  # Called with the same args/kwargs every time instructor calls out to the LLM provider (including on every retry).
    )

    print(user_info.name)
    print(user_info.age)


if __name__ == "__main__":
    main()

The code below is a working implementation for Anthropic's synchronous client that applies to create and create_with_completion.

This is more or less what it'd look like for other providers, if this is something you want to do.

An extension we could add is to provide an attempt_num and the intermediate completions received before each retry, which might be helpful for some use cases.
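
For illustration only (attempt_num and prior_completions are hypothetical names, not settled API; imports and logger are as in the example above):

def log_llm_args(
    args: list[Any],
    kwargs: dict[str, Any],
    attempt_num: int,  # 1 on the first call, incremented on each retry (hypothetical)
    prior_completions: list[Any],  # raw completions received before this attempt (hypothetical)
):
    logger.info(f"Attempt {attempt_num} LLM args: {args}")
    logger.info(f"Attempt {attempt_num} LLM kwargs: {kwargs}")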

I'm personally keen to have this kind of functionality in instructor, some way or another (vs. just logging).

If the strategy below works for you, I can submit a PR that works with sync/async and the major providers (OpenAI/Anthropic/Google).

If not, would love to know either:

a) This is not a problem you care to solve, or
b) You also want to solve it, but differently.

Thanks!


:rocket: This description was created by Ellipsis for commit ed5aeb98ef30dba0cc0a5c1ddc58d09eb337aa9b

Summary:

Introduced provider_args_callback to log or handle arguments passed to language model providers in instructor.

Key points:

  • Added provider_args_callback parameter to create, create_with_completion, and other relevant methods in instructor/client.py.
  • Updated patch function in instructor/patch.py to handle provider_args_callback.
  • Modified retry_sync and retry_async functions in instructor/retry.py to call provider_args_callback with request arguments and keyword arguments (a simplified sketch of this integration point follows the list).
  • Allows clients to pass a callback to log or handle arguments sent to language model providers.
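
Roughly, the integration point in the retry loop would look like the following simplified sketch. This is not the actual instructor internals (the real retry_sync also parses the response into the response_model and re-asks on validation errors); it just shows where the callback would fire.

from typing import Any, Callable, Optional

def retry_sync(
    func: Callable[..., Any],
    args: tuple[Any, ...],
    kwargs: dict[str, Any],
    max_retries: int = 1,
    provider_args_callback: Optional[Callable[[Any, Any], None]] = None,
) -> Any:
    last_exc: Optional[Exception] = None
    for _ in range(max_retries):
        if provider_args_callback is not None:
            # Fire before every provider call so the caller sees the exact
            # args/kwargs for each attempt, retries included.
            provider_args_callback(args, kwargs)
        try:
            return func(*args, **kwargs)
        except Exception as exc:
            last_exc = exc
    raise last_exc  # every attempt failed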

Generated with :heart: by ellipsis.dev

voberoi avatar Aug 06 '24 21:08 voberoi

Hmm, personally I think this could be solved by using something like Logfire or LangSmith, which would capture and log the information a lot better. Have you tried those options?

ivanleomk avatar Aug 07 '24 09:08 ivanleomk

I use third-party logging tools like those. I don't use those two in particular, but I'm familiar with them.

I don't have issues with such tools. They're important in production use cases.

However, there are scenarios where:

  1. I want to access these messages to log them locally or save them elsewhere, without using a third party logging tool.
  2. I want more visibility into retries.
  3. The logging provider I am using has not instrumented the underlying client.

I also don't think the default way to access whatever instructor sends to a provider should be to send data to a third-party service that happens to have instrumented instructor or the underlying client.

voberoi avatar Aug 07 '24 09:08 voberoi

What are your thoughts on attaching the callbacks to the constructor of the client rather than the create call?

from_openai(client, callbacks=[])

jxnl avatar Aug 07 '24 13:08 jxnl

That works, but something to consider is that clients will frequently want to tie these callbacks to a specific invocation, like so:

# (Imports, logger, and UserInfo are the same as in the example above.)
def get_logging_callback(some_id: str):
    def log_llm_args(args: list[Any], kwargs: dict[str, Any]):
        logger.info(f"Calling LLM provider for invocation {some_id}")
        logger.info(f"LLM args: {args}")
        logger.info(f"LLM kwargs: {kwargs}")
    return log_llm_args


def invoke_llm_for_task(task_id: str):
    client = instructor.from_anthropic(
        Anthropic(api_key=os.getenv("ANTHROPIC_API_KEY"))
    )

    user_info = client.chat.completions.create(
        model="claude-3-haiku-20240307",
        response_model=UserInfo,
        messages=[
            {
                "role": "system",
                "content": "You are an expert at extracting information about users from sentences.",
            },
            {"role": "user", "content": "John Doe is 30 years old."},
        ],
        max_tokens=4096,
        max_retries=3,
        provider_args_callback=get_logging_callback(task_id)
    )

In my code base I always instantiate the client right before an invocation (or a span of them), so attaching the callbacks to the constructor would be easy for me.

If instructor users are passing instantiated instructor clients around, though, constructor-attached callbacks might be more challenging for them to take advantage of.

Or, if you decide to make clients stateful, such that passing a previously-instantiated client around is the recommended way to use the library, constructor-attached callbacks might make things challenging later.

I don't see why you'd do that, though. Still, I think attaching the callbacks to the create* calls themselves might make more sense.

Your call.

Some other considerations unrelated to the above:

  • Should these callbacks be called before or after an invocation? I think before, since clients will then know what instructor sent over to LLM providers whether or not the call failed (they can catch exceptions).
  • Should the callbacks receive an attempt_num? I don't see why not; it's useful information, and the only way to figure it out otherwise is by looking through messages (which are formatted differently for every provider).
  • Should we add some DEBUG logging at the same point the callback is called? This is a common request (see the issues/comments I link to above), and doing this out of the box might be a nice QoL addition. But we don't have to, and can instead leave it to users to do themselves with this callback (with an example in the docs, like the sketch below).
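
For instance, the user-side version is only a few lines with the proposed callback (a sketch; instructor itself would need no DEBUG plumbing beyond provider_args_callback):

import logging
from typing import Any

logger = logging.getLogger("instructor")

def debug_llm_args(args: list[Any], kwargs: dict[str, Any]):
    # Pass this as provider_args_callback to get DEBUG-level visibility
    # into every provider call, retries included.
    logger.debug("LLM args: %s", args)
    logger.debug("LLM kwargs: %s", kwargs)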

voberoi avatar Aug 07 '24 16:08 voberoi

Hi! Bumping to see if this is something you'd want to do.

voberoi avatar Aug 15 '24 15:08 voberoi

sorry, been out.

i want to do this, but attached to the client rather than the create call. what are your thoughts on this?

jxnl avatar Sep 23 '24 17:09 jxnl

we're going to clean this up a bit with a refactor that will make it easier

jxnl avatar Oct 04 '24 22:10 jxnl

Hey! Sorry just saw this.

Yeah I'm fine with that strategy but also, keep me posted on the refactor and I can do it after.

Thank you!

voberoi avatar Oct 09 '24 12:10 voberoi

Hey! Do you guys have any updates about this?

Flopsky avatar Nov 02 '24 22:11 Flopsky

Hey David did you check out the new hooks feature? It should be able to do this
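
For reference, a minimal sketch with hooks, based on my reading of the hooks docs (which describe a completion:kwargs event that fires with the exact args/kwargs instructor sends to the provider):

import instructor
from anthropic import Anthropic

client = instructor.from_anthropic(Anthropic())

def log_completion_kwargs(*args, **kwargs):
    # completion:kwargs fires before every provider call, including
    # retries, with exactly what instructor sends over the wire.
    print("args:", args)
    print("kwargs:", kwargs)

client.on("completion:kwargs", log_completion_kwargs)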


ivanleomk avatar Nov 03 '24 05:11 ivanleomk

Hey, yes, it's really great 👍 for more precise logging! But do you know about the ability to get the attempt_num, even when the call was a success? It could help to benchmark the quality of a prompt in terms of minimizing the number of calls.

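One way to approximate attempt_num today is to count completion:kwargs events yourself. A sketch, assuming that event fires once per provider call (retries included), even when the overall call succeeds:

import instructor
from anthropic import Anthropic

client = instructor.from_anthropic(Anthropic())
attempts = {"count": 0}

def count_attempts(*args, **kwargs):
    # Each completion:kwargs event corresponds to one provider call, so
    # the counter reflects retries even on ultimately successful calls.
    attempts["count"] += 1

client.on("completion:kwargs", count_attempts)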

Flopsky avatar Nov 03 '24 19:11 Flopsky

Closing this PR since we've introduced hooks for now.

ivanleomk avatar Nov 21 '24 00:11 ivanleomk