feature request for handling callbacks
I think callback functions are also needed to make better use of semantic-kernel. It's the same concept as here (link).
This feature would also help with resolving existing TODOs. An example is here:
semantic-kernel/python/semantic_kernel/ai/open_ai/services/open_ai_chat_completion.py
async def complete_chat_async(
    self, messages: List[Tuple[str, str]], request_settings: ChatRequestSettings
) -> str:
    ...
    # TODO: tracking on token counts/etc.
    return response.choices[0].message.content
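For illustration, one way a callback could surface the data behind that TODO is sketched below, extending the snippet above; the on_completion parameter and its signature are hypothetical, not part of the current SK API:

from typing import Callable, List, Optional, Tuple

async def complete_chat_async(
    self,
    messages: List[Tuple[str, str]],
    request_settings: ChatRequestSettings,
    on_completion: Optional[Callable[[dict], None]] = None,  # hypothetical hook
) -> str:
    ...
    # Hand the token usage reported by the OpenAI response to the caller's callback.
    if on_completion is not None:
        on_completion(
            {
                "prompt_tokens": response.usage.prompt_tokens,
                "completion_tokens": response.usage.completion_tokens,
                "total_tokens": response.usage.total_tokens,
            }
        )
    return response.choices[0].message.content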
At a minimum, I hope SKContext or ContextVariables will get callback functions like on_update or on_setitem.
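As a rough illustration of that idea (a hypothetical sketch, not an existing SK class), a ContextVariables-like container could fire a callback on every write:

from typing import Callable, Dict, Optional

class ObservableVariables:
    """Hypothetical dict-like container that fires a callback on every write."""

    def __init__(self, on_setitem: Optional[Callable[[str, str], None]] = None):
        self._variables: Dict[str, str] = {}
        self._on_setitem = on_setitem

    def __setitem__(self, key: str, value: str) -> None:
        self._variables[key] = value
        if self._on_setitem is not None:
            self._on_setitem(key, value)  # notify observers, e.g. a tracing tool

    def __getitem__(self, key: str) -> str:
        return self._variables[key]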
If MS doesn't have a plan for this feature, or if it will take time, I can try implementing it myself.
I like the idea. We'll talk about it amongst the team first and follow up here.
@awharrison-28 @dluc @mkarle
@joowon-dm-snu we're going to track an item on our backlog for this!
If you have the time and bandwidth, we'd be happy to have you try out an implementation!
@stephentoub what do you think if we used this approach as a way to return extra information when calling AI, as opposed to emitting events?
I'm not clear on what exactly is being proposed. I haven't spent much time exploring the python side of the house, so how would this translate into the .NET APIs? Is the suggestion that various APIs accept e.g. an Action<something> that's invoked by the method in response to various things happening, and if so, is there a concrete example of where that would be employed?
I'm not sure about dotnet because I haven't really looked into it. Providing callbacks at least opens up the possibility of integrating with other LLM monitoring tools. A prime example is wandb.
Check out the link below for a better understanding. https://docs.wandb.ai/guides/prompts
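As a rough illustration of the shape such an integration could take, the sketch below just uses plain wandb.init/wandb.log rather than W&B's dedicated prompt-tracing API, and the callback signature is hypothetical:

import wandb

wandb.init(project="sk-tracing")  # assumes a configured W&B account

def on_completion(prompt: str, response: str, usage: dict) -> None:
    # Log one row per model call; how this callback gets wired into SK is the open question here.
    wandb.log({"prompt": prompt, "response": response, **usage})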
It also opens up native integration with AzureML by using the OSS MLflow APIs
here is an example: https://github.com/hwchase17/langchain/pull/4150
happy to provide the implementation / design guidance on it too - you can find me internally on the AML team if you're interested in this feature
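As a sketch of the MLflow side (the handler and its method names are hypothetical; the mlflow calls themselves are standard OSS MLflow APIs):

import mlflow

class MlflowCallbackHandler:
    """Hypothetical handler that records each model call to an MLflow run."""

    def __init__(self):
        mlflow.start_run()
        self._step = 0

    def on_completion(self, prompt: str, response: str, usage: dict) -> None:
        self._step += 1
        # Store the raw prompt/response as run artifacts and token counts as metrics.
        mlflow.log_text(prompt, f"prompts/{self._step:04d}.txt")
        mlflow.log_text(response, f"responses/{self._step:04d}.txt")
        mlflow.log_metrics({k: float(v) for k, v in usage.items()}, step=self._step)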
@akshaya-a yes we'd love to see an implementation! thank you!
@akshaya-a Thank you, I've been busy and haven't had the time. How long do you think it will take?
@alexchaomander I can try a simple implementation of this. Is there anyone on your team who is already working on it?
@joowon-dm-snu This is currently being discussed with the team. @akshaya-a was also saying he could implement something too. I'd say if you have the time and can implement a simple version, we can use that to help frame our discussions!
Go for it, I won't get to it until after //build week
Hi, any news on this?
Having a callback system would also help with debugging. Currently there is no way to get the intermediate prompts, inputs, and outputs of a long chain of piped functions.
This is a possible structure for a handler, with hooks in the order they will be called:
from typing import List, Tuple, Union

class CallbackHandlerBase:
    def on_pipeline_start(self, context: SKContext):
        pass

    def on_function_start(self, context: SKContext, func: SKFunctionBase):
        pass

    # only for semantic functions
    def on_prompt_rendered(self, context: SKContext, func: SKFunctionBase, prompt: Union[str, List[Tuple[str, str]]]):
        pass

    # only if there is an error
    def on_function_error(self, context: SKContext, func: SKFunctionBase):
        pass

    def on_function_end(self, context: SKContext, func: SKFunctionBase):
        pass

    def on_pipeline_end(self, context: SKContext):
        pass
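For example, a debugging handler for the intermediate-prompt problem mentioned above could be as small as this (a hypothetical subclass, assuming SKFunctionBase exposes a name property):

class PromptRecorder(CallbackHandlerBase):
    """Collects every rendered prompt so a piped run can be inspected afterwards."""

    def __init__(self):
        self.prompts = []

    def on_prompt_rendered(self, context: SKContext, func: SKFunctionBase, prompt):
        # Store (function name, rendered prompt) for later inspection.
        self.prompts.append((func.name, prompt))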
Some additional hooks should be added for the planner.
The pipeline itself carries almost no identifying information. Perhaps the call to run_async could accept an optional legend parameter, just to identify a specific pipeline, and pass it on to the handler so that traces are clearer.
SKFunctions already carry their identification in their definitions.
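For instance (assuming the optional legend parameter proposed above):

result = await kernel.run_async(
    sf['func1'], nf['func2'], context=context, legend="summarize-and-translate"
)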
Handlers could be set at the kernel level to act as defaults, or at the call level to override them for a specific invocation. All handlers would be invoked in order.
# default handlers
kernel.set_handlers([stdout_handler, thought_process_md])

# call using default handlers
result = await kernel.run_async(sf['func1'], nf['func2'], context=context)

# specific handler for this call
result = await kernel.run_async(sf['func1'], nf['func2'], context=context, handlers=[thought_process_html])
If you agree, I could send a PR with an implementation of this approach.
Later on, some general on_error, on_warning, on_info hooks could also be added, and the handler could be used in place of the log parameter, allowing more consistent control of messages.
A default logger_handler could be provided that simply writes to the supplied logger, so the current behavior is not broken.
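A minimal sketch of such a default handler, assuming the CallbackHandlerBase interface proposed above and a name property on SKFunctionBase:

import logging
from typing import Optional

class LoggerCallbackHandler(CallbackHandlerBase):
    """Writes hook invocations to a standard logger, preserving the current log-based behavior."""

    def __init__(self, logger: Optional[logging.Logger] = None):
        self._log = logger or logging.getLogger(__name__)

    def on_function_start(self, context: SKContext, func: SKFunctionBase):
        self._log.info("starting function %s", func.name)

    def on_function_error(self, context: SKContext, func: SKFunctionBase):
        self._log.error("function %s raised an error", func.name)

    def on_function_end(self, context: SKContext, func: SKFunctionBase):
        self._log.info("finished function %s", func.name)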
Hi @alexchaomander, any comments on the proposal and PR? Having no way to track what's going on and "debug" it is a big blocker for me.
@ianchi Apologies for the late reply! Our Python team has been out on vacation so we haven't been able to get to these.
Overall I like the structure that you propose above. Can you and @joowon-dm-snu possibly collaborate on this? We'd love to see a proposed PR.
And I know @akshaya-a is interested in this too so he can provide a more in-depth commentary.
Once the team gets back, we can dig into this deeper!
Hi, I've already submitted PR #1630 with a draft implementation that I'm using in the meantime.
I'm open to any suggestions to improve and expand it. @joowon-dm-snu / @akshaya-a, please share your views and use cases. Adding @RogerBarreto as well, since he was mentioned in the PR as also working on this.
Closing this issue per @shawncal comment on PR #1630.