Support for structured outputs or function calling
Function calling is supported; here's an example:

https://github.com/Azure-Samples/contoso-creative-writer/blob/fe9478e42d4f190b820bd491651e7df2c35adb35/src/api/agents/researcher/researcher.prompty#L15

We basically use this syntax to load a JSON file:

`${file:functions.json}`

I haven't tried the full `response_format`, but I think it should work too. Any extra model configuration parameters can be passed the same way, so give this a try:

```yaml
model:
  ...
  parameters:
    max_tokens: 1024
    response_format: ${file:xxx.json}
```
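For reference, a `functions.json` loaded this way would just hold the OpenAI tools array; here's a minimal hypothetical example (the function name and schema are made up for illustration, not taken from the linked sample):

```json
[
  {
    "type": "function",
    "function": {
      "name": "lookup_weather",
      "description": "Look up the current weather for a city.",
      "parameters": {
        "type": "object",
        "properties": {
          "city": { "type": "string" }
        },
        "required": ["city"]
      }
    }
  }
]
```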
Thanks @wayliums.
Hey @wayliums, structured outputs won't work: `Invalid value: 'MyBaseModel'. Supported values are: 'json_object' and 'text'.` I created a Pydantic BaseModel, but Prompty only accepts the two types above.
@sethjuarez and @wayliums, are you taking PRs? I've created a local branch with a candidate fix, since I found the root cause and have a working example.
In general, we basically `**kwargs` the model params and configuration. Is the run failing in the runtime or the extension? I know we use function calling in other samples. As far as the response format is concerned - yeah, I'm not sure we are enforcing it "quite" yet in the processor invokers. Would love to see what you built for it!
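A minimal sketch of that kind of `**kwargs` passthrough, under the assumption that the executor just forwards everything under `parameters:` to the client call (function names here are illustrative, not Prompty's actual internals):

```python
def build_request(messages, **model_parameters):
    # Hypothetical executor helper: every key/value from the prompty
    # front matter's `parameters:` block is forwarded as-is, so new
    # model options need no explicit support in the runtime.
    return {"messages": messages, **model_parameters}

request = build_request(
    [{"role": "user", "content": "hi"}],
    max_tokens=1024,
    response_format={"type": "json_object"},
)
# In a real executor this dict would feed client.chat.completions.create(**request).
```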
I took inspiration from the other samples you shared for my function calling, but it's failing in both the runtime and the extension due to two problems:

- Both the VS Code extension and standalone Prompty need to upgrade the Azure OpenAI executor to v1.43.0 and invoke `AzureOpenAI.beta.chat.completions.parse` for structured output to work.
- Parsing of the JSON files: with `response_format: ${file:structure.json}` and `tools: ${file:functions.json}`, the Prompty VS Code extension interprets the BaseModel class and the required `"strict": true` and `"additionalProperties": false` function parameters as text instead of, respectively, a Pydantic object and booleans. So, to use structured outputs and tools, I override/merge the parameters in my PromptFlow flow code (which won't work in the Prompty VS Code extension) using the standalone Prompty runtime:
```python
...
parameters = {
    "response_format": MyPydanticBaseModel,
    "tools": [{
        "type": "function",
        "function": {
            "name": "my-function",
            "strict": True,
            # ...
            "additionalProperties": False,
        }
    }]
}
result = prompty.execute(prompt=path_to_prompty, inputs=inputs, parameters=parameters)
```
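For context on the root cause: the SDK rejects a class where `chat.completions.create` expects a dict, so a fix needs to route class-valued `response_format`s to the beta endpoint. A rough, hypothetical sketch of that dispatch (illustrative only, not the actual Prompty code):

```python
import inspect

def choose_completions_endpoint(response_format):
    # A Pydantic BaseModel subclass (i.e. any class) must go through
    # client.beta.chat.completions.parse(); dict values such as
    # {"type": "json_object"} can keep using chat.completions.create().
    if inspect.isclass(response_format):
        return "beta.chat.completions.parse"
    return "chat.completions.create"
```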
If that makes sense, please let me know how we can fix this and when we can merge a PR back into the PromptFlow lib and the Prompty VS Code extension, so I can turn a GenAI minefield walk into a rosy path for my customers :))
OK - I think I understand. I think that for the prompty standalone runtime we need a beta invoker for aoai to solve this. In general, though, the args should be passed in and set up correctly in either case (i.e. I would expect a "not a valid arg" type exception).
---

EDIT: If you have a PR, I'd love to see it!
https://github.com/microsoft/prompty/pull/93 addresses the type error `You tried to pass a BaseModel class to chat.completions.create(); You must use beta.chat.completions.parse() instead` that occurs when using structured outputs with the example above.
https://github.com/microsoft/prompty/pull/93 merged.
Thanks for the PR!!