add logging callback for llm completion events
Now consumers can log all LLM completion requests like this:
```python
from livekit.agents.voice_assistant import VoiceAssistant
from livekit.plugins import deepgram, openai, silero

# forward every LLM completion event to an external logger
def will_log_completion_event(cht_ctx, collected_text, tool_calls, interrupted):
    my_logger(cht_ctx, collected_text, tool_calls, interrupted)

assistant = VoiceAssistant(
    vad=silero.VAD.load(),
    stt=deepgram.STT(),
    llm=openai.LLM(),
    tts=openai.TTS(),
    chat_ctx=initial_ctx,
    will_log_completion_event=will_log_completion_event,
)
```
It gets the updated cht_ctx from will_synthesize_agent_reply.
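
For context, a minimal sketch of what my_logger could do with those arguments, e.g. appending one JSON record per completion to an audit log. The file path, field names, and the use of repr() for the chat context and tool calls are assumptions for illustration, not part of the proposal:

```python
import json
import time

AUDIT_LOG_PATH = "llm_completions.jsonl"  # hypothetical location for the audit trail

def my_logger(cht_ctx, collected_text, tool_calls, interrupted):
    # Append one JSON line per completion event. The chat context and tool
    # calls are stringified via repr() since their exact shapes depend on the
    # livekit-agents version in use.
    record = {
        "ts": time.time(),
        "chat_ctx": repr(cht_ctx),
        "collected_text": collected_text,
        "tool_calls": [repr(tc) for tc in (tool_calls or [])],
        "interrupted": interrupted,
    }
    with open(AUDIT_LOG_PATH, "a") as f:
        f.write(json.dumps(record) + "\n")
```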
In general, I would like to build out more of this type of audit trail. Other things I am thinking about include first-class support for testing (see hamming.ai or similar).