openai-node
Passing global context into tools called by the runTools helper
Confirm this is a feature request for the Node library and not the underlying OpenAI API.
- [X] This is a feature request for the Node library
Describe the feature or improvement you're requesting
I currently have a pattern where I need to pass context to my tools to allow them to act on my app. For example:
async function updateEvent(context: { eventId: string }, args: ArgsFromOpenAi) {
  const { eventId } = context;
  const event = await fetchEvent(eventId);
}
It'd be great if there were some way to pass a global context to the runner since the runner is passed into each function call. Then, I could do something like this:
async function updateEvent(args: ArgsFromOpenAi, runner: ChatCompletionStreamingRunner<EventContext>) {
  const { eventId } = runner.context;
  const event = await fetchEvent(eventId);
}
Additional context
A workaround is to build my own runner that leverages the existing helpers. However, this is complicated because of the type integrations involved.
Could you use a closure for this?
async function callFunctions() {
  const context = {};
  async function updateEvent(args: ArgsFromOpenAi) {
    const { eventId } = context;
    const event = await fetchEvent(eventId);
  }
  await client.beta.chat.completions.runTools({ tools: [{ type: 'function', function: { function: updateEvent } }] });
}
I considered a closure like you wrote, but it would require merging tools defined across multiple files into a single (very) large file. I'm currently leveraging an inherited class to provide the closure, but running into some typing limitations with it that are worked around with casting and use of any.
LMK if I'm missing a simpler solution 🙏
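For reference, the inherited-class approach mentioned above could be sketched roughly like this; `ToolContext`, `RunnableFunctionLike`, `ToolSet`, and `EventTools` are stand-ins invented for illustration, not openai-node types:

```typescript
interface ToolContext {
  eventId: string;
}

// Stand-in shape for a runnable tool function (illustrative only; the real
// type is RunnableFunction from openai-node's beta helpers).
interface RunnableFunctionLike<Args> {
  name: string;
  function: (args: Args) => Promise<unknown>;
}

// Base class whose instances close over a shared context for all their tools.
abstract class ToolSet {
  constructor(protected readonly context: ToolContext) {}
  abstract tools(): RunnableFunctionLike<any>[];
}

class EventTools extends ToolSet {
  tools(): RunnableFunctionLike<object>[] {
    return [
      {
        name: 'updateEvent',
        // The instance provides the closure over this.context.
        function: async (args: object) => ({ eventId: this.context.eventId, ...args }),
      },
    ];
  }
}
```

The typing friction described above would show up when these instances have to be converted into the exact shape the library expects, which is where the casting and `any` come in.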
Got it, that makes sense. Can you share more examples of what you've implemented or what you'd want this to look like / how you'd want to use it? E.g., where would you want to store context, and how would you expect to update it?
I'm imagining something that matches the lifecycle of runTools. The context should stay the same throughout that run. It is probably simplest to pass down a context through the runTools interface. E.g.
runTools(... context)
My implementation is kind of a hack that I worked up after realizing that the runner is passed to each tool call. I simply added a context class variable to a class that extends ChatCompletionStreamingRunner. This doesn't work out well because other references to the tool types don't expect my custom class.
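That subclass hack might look something like the following sketch; `StreamingRunnerBase` is a stand-in for openai-node's ChatCompletionStreamingRunner so the example is self-contained:

```typescript
// Stand-in for openai-node's ChatCompletionStreamingRunner (illustrative only).
class StreamingRunnerBase {}

// Subclass that smuggles a context object onto the runner.
class ContextualRunner<Ctx> extends StreamingRunnerBase {
  constructor(public readonly context: Ctx) {
    super();
  }
}

// Library code and tool typings only know about the base class, so reading
// the context back requires a cast, which is exactly the friction described above.
function readEventId(runner: StreamingRunnerBase): string {
  return (runner as ContextualRunner<{ eventId: string }>).context.eventId;
}
```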
Thanks. Could you provide a more complete code sample of what you're trying to do / how you're trying to use this? Including how you update and reference the context?
Have you tried using .bind(context) on the functions before passing them in, and referencing this for context? Or even a pattern like this?
// in one file
const updateEvent = (context: Context) =>
  async function updateEvent(args: ArgsFromOpenAI) {
    const { eventId } = context;
    const event = await fetchEvent(eventId);
  };

// in another
const context = {…};
await client.beta.chat.completions.runTools({ tools: [{
  type: 'function',
  function: {
    function: updateEvent(context),
    name: 'updateEvent',
  },
}]});
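For completeness, the `.bind(context)` variant mentioned above could look like this sketch (plain stand-in types; the bound function would then be handed to runTools like any other tool implementation):

```typescript
interface Context {
  eventId: string;
}

// Tool written against `this` as its context.
async function updateEventTool(this: Context, args: { numDays?: number }) {
  return { eventId: this.eventId, ...args };
}

const context: Context = { eventId: 'evt_123' };

// Bind once; the result is an ordinary (args) => Promise function with no
// remaining `this` parameter, suitable for passing to runTools.
const boundUpdateEvent = updateEventTool.bind(context);
```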
That pattern works! Thanks for the suggestion.
The caveat is that the tool definition has to be managed within the scope of the context, which requires a good bit of refactoring on my end.
You can close this issue if you think it's best that providing context not be built into the library!
Thanks!
Hmm, closing it might be the easy option, but I'd like to provide the best possible experience. Would you be willing to share a more complete code sample of what you'd ideally like to see, including how you update and read from context?
Sorry for the late response; I've been pushing to get the feature launched and left this as tech debt.
I was able to circle back to clean it up.
Here's how my implementation looks with the function closure:
// types.ts
interface ToolContext {
  eventId: string;
}

// eventManagerTools.ts
const updateEvent = (context: ToolContext) =>
  async function updateEvent(eventDetails: UpdateEventArgs) {
    const { eventId } = context;
    ...
  };

export const eventManagerTools: Record<
  EventManagerToolNames,
  LLMFunctionWithContext<UpdateEventArgs | SetPrimaryVendorArgs>
> = {
  [EventManagerToolNames.UPDATE_EVENT]: {
    name: EventManagerToolNames.UPDATE_EVENT,
    description: 'Updates event given one or more event details from the customer. Only call when values have changed',
    function: updateEvent,
    parse: JSON.parse,
    parameters: {
      type: 'object',
      properties: {
        maxBudgetPerGuest: {
          type: 'number',
          description:
            'Sets the maximum budget per guest. This should only include numeric values. If math is required, think through it and provide the output',
        },
        numDays: {
          type: 'number',
          description: 'Duration (in days) of the event',
        },
        ...
      },
    },
  },
};
// llmFacade.ts
export type LLMFunctionWithContext<Args extends object | string> = Omit<RunnableFunction<Args>, 'function'> & {
  function: (context: BoompopToolContext) => RunnableFunction<Args>['function'];
};

export function toTools(llmFunctions: LLMFunctionWithContext<any>[], context: BoompopToolContext) {
  // helper to convert function-with-context definitions into runnable tools
  return llmFunctions.map(
    (llmFunction) => ({
      type: 'function',
      function: {
        ...llmFunction,
        function: llmFunction.function(context),
      },
    })
  );
}

export async function completionStreamWithTools(systemPrompt: string, tools: RunnableToolFunction<any>[]) {
  // simplified as an example
  const runner = ChatCompletionStreamingRunner.runTools(openai.chat.completions, {
    messages,
    model,
    tools,
    temperature,
    stream: true,
  });
}
// llmOrchestrator.ts
async function orchestrateResponse() {
  const agent = {
    tools: Object.values(eventManagerTools),
  };
  // pass context scoped to this single stream call
  await completionStreamWithTools('Plan an event', toTools(agent.tools, { eventId }));
}
This feels fairly good. The only caveat is that I have to override and maintain my own type and a wrapper to convert to the function type that RunnableFunction expects, so the abstraction gets a bit leaky.
It would be great to simplify the above by being able to simply change completionStreamWithTools to a function signature like this:
export async function completionStreamWithTools(systemPrompt: string, tools: RunnableToolFunction<any>[], globalToolContext: ToolContext)
Then, I would call runTools like this:
const runner = ChatCompletionStreamingRunner.runTools(openai.chat.completions, {
  messages,
  model,
  tools,
  toolContext: globalToolContext,
  temperature,
  stream: true,
});
I imagine that the tool context would then be provided to each tool with something like this:
updateEvent({ ... }: Args, runner: Runner, toolContext: ToolContext)
Alternatively, the tool context could be destructured into Args, but that might be more complicated than it is worth.
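To make the proposal concrete, the context threading could be sketched like this; everything here (`RunnableTool`, `runToolsWithContext`) is hypothetical and only illustrates how a shared toolContext would reach each tool call, not an actual openai-node API:

```typescript
interface ToolContext {
  eventId: string;
}

// Hypothetical tool shape: the library would pass toolContext as an extra argument.
type RunnableTool<Args> = {
  name: string;
  function: (args: Args, toolContext: ToolContext) => Promise<unknown>;
};

// Hypothetical runner core: the same context object is forwarded to every
// tool call for the lifetime of the run.
async function runToolsWithContext(
  tools: RunnableTool<any>[],
  toolContext: ToolContext,
  call: { name: string; args: unknown },
): Promise<unknown> {
  const tool = tools.find((t) => t.name === call.name);
  if (!tool) throw new Error(`unknown tool: ${call.name}`);
  return tool.function(call.args, toolContext);
}
```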
Interesting. Thank you very much for sharing; this is quite helpful. The toolContext suggestion is interesting and we'll take that back to the team.
What do you think about something like this, so you don't have to subclass or write toTools?
export const eventManagerTools: Record<
  EventManagerToolNames,
  LLMFunctionWithContext<UpdateEventArgs | SetPrimaryVendorArgs>
> = {
  [EventManagerToolNames.UPDATE_EVENT]: new RunnableFunction({
    description: 'Updates event given one or more event details from customer. Only call when values have changed',
    function: updateEvent(context),
    // …
  }),
  // …
};

const runner = openai.beta.chat.completions.runTools({
  messages,
  model,
  tools,
  // …
});
Oh, that's a nice suggestion! Though I think that might still lead to folks wanting to DRY up the RunnableFunction instantiation with a helper like toTools to reduce boilerplate.
e.g. to DRY up this
export const eventManagerTools: Record<
  EventManagerToolNames,
  LLMFunctionWithContext<UpdateEventArgs | SetPrimaryVendorArgs>
> = {
  [EventManagerToolNames.TOOL_A]: new RunnableFunction(...),
  [EventManagerToolNames.TOOL_B]: new RunnableFunction(...),
  [EventManagerToolNames.TOOL_C]: new RunnableFunction(...),
  [EventManagerToolNames.TOOL_D]: new RunnableFunction(...),
  ...
}