Bespoke UI with Structured Output
So besides the work already done in:
- https://github.com/metaskills/experts/pull/22
- https://github.com/metaskills/experts/pull/23
Can we do something with the new OpenAI Structured Outputs? See the "Dynamically generating user interfaces based on the user's intent" section of their announcement:
https://openai.com/index/introducing-structured-outputs-in-the-api/
Here is a gist of what they showed and how to use Zod with the new response format helper:
https://gist.github.com/metaskills/2bf51e5f4ed89a5d5e0d3dfe78be0d1f
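For reference, the `response_format` that the helper produces is just a strict JSON Schema wrapper. A minimal hand-rolled sketch of it, with the recursive `ui` schema paraphrased from the blog post's example (the exact enum values and property names below are my reading of the post, not anything from the experts repo):

```typescript
// Recursive UI schema, modeled on the blog post's "dynamically
// generating user interfaces" example: each node has a type, a
// label, recursive children, and name/value attributes.
const uiSchema = {
  type: "object",
  properties: {
    type: {
      type: "string",
      enum: ["div", "button", "header", "section", "field", "form"],
    },
    label: { type: "string" },
    children: { type: "array", items: { $ref: "#" } }, // recursion
    attributes: {
      type: "array",
      items: {
        type: "object",
        properties: {
          name: { type: "string" },
          value: { type: "string" },
        },
        required: ["name", "value"],
        additionalProperties: false,
      },
    },
  },
  required: ["type", "label", "children", "attributes"],
  additionalProperties: false,
};

// The shape zodResponseFormat(schema, "ui") hands to the API:
// a json_schema response format with strict mode on.
const responseFormat = {
  type: "json_schema",
  json_schema: { name: "ui", strict: true, schema: uiSchema },
};

console.log(JSON.stringify(responseFormat.json_schema.name));
```

So the Zod part is ergonomics only; the wire format is plain strict JSON Schema either way.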
I think the question is what processes the output and creates styled presentation layer components.
| Landing Page | Sign Up | Stock Price |
|---|---|---|
Their demos do not seem to line up with the Zod response format they showed, which is expected; I get the point. But there are still a lot of questions. For example, are we going to throw a bunch of HTML into the context window from a tool output and expect the LLM to echo it verbatim into a chat completion or an Assistants response?
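One alternative to round-tripping HTML through the context window: have the model return only the component tree as structured output, and do the HTML generation client-side. A sketch of what that presentation layer could look like, assuming the recursive node shape from the blog post (the `UINode` type and `renderHTML` function are hypothetical names, not anything in experts):

```typescript
// Hypothetical renderer: walks the structured-output component
// tree and emits markup, so the LLM never sees or produces HTML.
type UINode = {
  type: string;
  label: string;
  children: UINode[];
  attributes: { name: string; value: string }[];
};

function renderHTML(node: UINode): string {
  // Serialize attributes as name="value" pairs.
  const attrs = node.attributes
    .map((a) => ` ${a.name}="${a.value}"`)
    .join("");
  // A node's content is its label followed by its rendered children.
  const inner = node.label + node.children.map(renderHTML).join("");
  return `<${node.type}${attrs}>${inner}</${node.type}>`;
}

// Example tree, shaped like a parsed structured-output response.
const tree: UINode = {
  type: "form",
  label: "",
  attributes: [{ name: "class", value: "signup" }],
  children: [
    { type: "button", label: "Sign Up", children: [], attributes: [] },
  ],
};

console.log(renderHTML(tree));
// <form class="signup"><button>Sign Up</button></form>
```

The styling question then becomes a mapping problem on our side (e.g. node `type` to a styled component), not something the model has to emit verbatim.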
Demonstrate Function Calling & Structured Outputs with Zod - https://github.com/metaskills/experts/commit/ef54d182936f1836c512de440a54baf564ffdd43