
Support for structured outputs or function calling

Open vincentgiraud opened this issue 1 year ago • 8 comments

Is there any plan to support structured output as response format or function calling in Prompty?

*(screenshots attached)*

vincentgiraud avatar Aug 20 '24 16:08 vincentgiraud

Function calling is supported; here's an example:

https://github.com/Azure-Samples/contoso-creative-writer/blob/fe9478e42d4f190b820bd491651e7df2c35adb35/src/api/agents/researcher/researcher.prompty#L15

We basically use this syntax to load a JSON file:

`${file:functions.json}`

I haven't tried the full `response_format` but I think it should work too.

Any extra model configuration parameters can be passed the same way, so give this a try:

```yaml
model:
  ...
  parameters:
    max_tokens: 1024
    response_format: ${file:xxx.json}
```
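For readers unfamiliar with the referenced file, here is a hedged sketch (not taken from the linked sample) of what a `functions.json` loaded via `${file:functions.json}` might contain: a list of OpenAI-style tool definitions. The function name and parameters below are hypothetical.

```python
import json

# Hypothetical tool definition in the OpenAI function-calling shape.
tools = [
    {
        "type": "function",
        "function": {
            "name": "find_information",  # hypothetical name
            "description": "Search for information on a topic.",
            "parameters": {
                "type": "object",
                "properties": {
                    "query": {
                        "type": "string",
                        "description": "The search query.",
                    }
                },
                "required": ["query"],
            },
        },
    }
]

# The file referenced by ${file:functions.json} would hold this serialized:
print(json.dumps(tools, indent=2))
```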

wayliums avatar Aug 22 '24 04:08 wayliums

Thanks @wayliums.

vincentgiraud avatar Aug 22 '24 09:08 vincentgiraud

> function calling is supported, here's the example.
>
> https://github.com/Azure-Samples/contoso-creative-writer/blob/fe9478e42d4f190b820bd491651e7df2c35adb35/src/api/agents/researcher/researcher.prompty#L15
>
> We basically use this syntax to load json file.
>
> `${file:functions.json}`
>
> I haven't tried the full response_format but I think it should work too.
>
> Any extra model configuration parameters can also be passed in this way. So give this a try:
>
> ```yaml
> model:
>   ...
>   parameters:
>     max_tokens: 1024
>     response_format: ${file:xxx.json}
> ```

Hey @wayliums, structured outputs won't work: `Invalid value: 'MyBaseModel'. Supported values are: 'json_object' and 'text'.` I created a Pydantic BaseModel, but Prompty only accepts the two values above.
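For context on the error: the older chat-completions API accepted only `"text"` or `"json_object"` as `response_format` types, while Structured Outputs instead takes a JSON schema marked `"strict": true`. A minimal sketch of that strict shape (the schema name and fields are illustrative, not from the thread):

```python
# Illustrative strict Structured Outputs response_format; the name and
# schema contents are hypothetical.
response_format = {
    "type": "json_schema",
    "json_schema": {
        "name": "my_base_model",  # hypothetical name
        "strict": True,
        "schema": {
            "type": "object",
            "properties": {"answer": {"type": "string"}},
            "required": ["answer"],
            "additionalProperties": False,
        },
    },
}
```

A runtime that only validates against `'json_object'` and `'text'` rejects both this shape and a Pydantic class, which matches the error quoted above.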

vincentgiraud avatar Sep 04 '24 07:09 vincentgiraud

@sethjuarez and @wayliums, are you taking PRs? I've created a local branch with a candidate fix, as I found the root cause, and I have a working example.

vincentgiraud avatar Sep 05 '24 17:09 vincentgiraud

In general, we basically `**kwargs` the model params and configuration. Is the run failing in the runtime or the extension? I know we use function calling in other samples. As far as the response format is concerned, yeah, I'm not sure we're enforcing it "quite" yet in the processor Invokers. Would love to see what you built for it!

sethjuarez avatar Sep 05 '24 18:09 sethjuarez

> In general, we basically `**kwargs` in model params and configuration. Is the run failing in the runtime or the extension? I know we use function calling in other samples. As far as the response format is concerned - yeah, I'm not sure we are enforcing it "quite" yet in the processor Invokers. Would love to see what you built for it!

I took inspiration from the other samples you shared for my function calling, but it's failing in both the runtime and the extension due to two problems:

  1. Both the VS Code extension and standalone Prompty need to upgrade the Azure OpenAI executor to v1.43.0 and invoke `AzureOpenAI.beta.chat.completions.parse` for structured output to work.
  2. When the Prompty VS Code extension parses the JSON files (`response_format: ${file:structure.json}` and `tools: ${file:functions.json}`), the BaseModel class and the required `"strict": true` and `"additionalProperties": false` function parameters get interpreted as text instead of a Pydantic object and booleans, respectively. So to use structured outputs and tools, I override/merge the parameters in my PromptFlow flow code (which won't work in the Prompty VS Code extension) using the standalone Prompty runtime:

```python
...
parameters = {
    "response_format": MyPydanticBaseModel,
    "tools": [{
        "type": "function",
        "function": {
            "name": "my-function",
            "strict": True,
            ...
            "additionalProperties": False,
        },
    }],
}
result = prompty.execute(prompt=path_to_prompty, inputs=inputs, parameters=parameters)
```
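The override at `execute()` time boils down to a dict merge where runtime parameters take precedence over what the `.prompty` file declares. A minimal sketch of that idea, with hypothetical values (prompty's actual merge/precedence logic may differ):

```python
# Hypothetical values from the .prompty model section.
file_params = {"max_tokens": 1024, "temperature": 0.2}

# Hypothetical runtime overrides passed to execute().
override_params = {"temperature": 0.0, "response_format": "json_object"}

# Later keys win, so runtime overrides replace file values.
merged = {**file_params, **override_params}
```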

If that makes sense, please let me know how we can fix this and when we can merge a PR back into the PromptFlow lib and the Prompty VS Code extension, so I can turn a GenAI minefield walk into a rosy path for my customers :))

vincentgiraud avatar Sep 06 '24 09:09 vincentgiraud

OK - I think I understand. For the prompty standalone runtime, we need a beta invoker for AOAI to solve this. In general, though, the args should be passed in and set up correctly in either case (i.e., I would expect a "not a valid arg" type exception).

**EDIT:** If you have a PR, would love to see it!

sethjuarez avatar Sep 06 '24 18:09 sethjuarez

> OK - I think I understand. I think that for the prompty standalone runtime we need a beta invoker for aoai to solve this. In general, though, the args should be passed in and set up correctly in either case (i.e. I would expect a "not a valid arg" type exception).
>
> **EDIT:** If you have a PR - would love to see it!

https://github.com/microsoft/prompty/pull/93 addresses the type error `You tried to pass a BaseModel class to chat.completions.create(); You must use beta.chat.completions.parse() instead` that occurs when trying to use structured outputs with the example given above.
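The error implies a dispatch decision in the executor: when `response_format` is a class (e.g. a Pydantic `BaseModel` subclass) rather than a plain value, the beta `parse()` endpoint must be used instead of `create()`. A sketch of that logic (not the actual PR #93 code; names are hypothetical):

```python
def pick_endpoint(response_format):
    """Return which chat-completions endpoint a caller should use."""
    # A class (like a Pydantic model) requires the beta parse() endpoint;
    # plain dicts/strings go through the regular create() endpoint.
    if isinstance(response_format, type):
        return "beta.chat.completions.parse"
    return "chat.completions.create"


class MyBaseModel:  # stand-in for a Pydantic BaseModel subclass
    pass


print(pick_endpoint(MyBaseModel))              # class -> parse
print(pick_endpoint({"type": "json_object"}))  # plain dict -> create
```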

vincentgiraud avatar Sep 09 '24 08:09 vincentgiraud

https://github.com/microsoft/prompty/pull/93 has been merged.

vincentgiraud avatar Oct 29 '24 16:10 vincentgiraud

Thanks for the PR!!

sethjuarez avatar Oct 29 '24 17:10 sethjuarez