ai
Stream JSON object by object
Hi, I want to thank you for such a great library, it's very cool.
And I would like to ask for help, because I can't figure out how to implement something. I want to build a bot that consists of 4 steps (each step uses a different system prompt), where the answer of the previous step is the input of the next step. My problem is that the answer of the first step is an array of objects in JSON format, and I can't figure out how to stream it one object (array element) at a time. I want to accumulate the stream until an entire object is in the buffer, transform that object into a human-readable format instead of raw JSON, output it, and repeat for each element of the JSON array.
Thanks!
"I wanted to accumulate the stream until the entire object is in buffer, make it human readable, and not output JSON"
Do you just want to "await" the call until the end? It sounds like streaming might not be the right answer.
Otherwise, this looks like it might be useful https://js.langchain.com/docs/modules/chains/sequential_chain
An LLMChain that generates a play synopsis from a title (the input):
import { OpenAI } from "langchain/llms/openai";
import { PromptTemplate } from "langchain/prompts";
import { LLMChain } from "langchain/chains";

// This is an LLMChain to write a synopsis given a title of a play.
const llm = new OpenAI({ temperature: 0 });
const template = `You are a playwright. Given the title of play, it is your job to write a synopsis for that title.
Title: {title}
Playwright: This is a synopsis for the above play:`;
const promptTemplate = new PromptTemplate({
  template,
  inputVariables: ["title"],
});
const synopsisChain = new LLMChain({ llm, prompt: promptTemplate });
An LLMChain that critiques the play, taking the synopsis from the previous chain as input:
// This is an LLMChain to write a review of a play given a synopsis.
const reviewLLM = new OpenAI({ temperature: 0 });
const reviewTemplate = `You are a play critic from the New York Times. Given the synopsis of play, it is your job to write a review for that play.
Play Synopsis:
{synopsis}
Review from a New York Times play critic of the above play:`;
const reviewPromptTemplate = new PromptTemplate({
  template: reviewTemplate,
  inputVariables: ["synopsis"],
});
const reviewChain = new LLMChain({ llm: reviewLLM, prompt: reviewPromptTemplate });
Run the overall chain:
import { SimpleSequentialChain } from "langchain/chains";

const overallChain = new SimpleSequentialChain({
  chains: [synopsisChain, reviewChain],
  verbose: true,
});
const review = await overallChain.run("Tragedy at sunset on the beach", [handlers]);
Then, you can add streaming to either llm or reviewLLM (or both, to get the intermediate result) by adding streaming: true.
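For instance, a streaming callback could collect tokens as they arrive. This is only a sketch: makeTokenCollector is a hypothetical helper (not part of LangChain or the ai package), while the handleLLMNewToken hook matches LangChain JS's callback handler API:

```typescript
// Hypothetical helper: collects streamed tokens into a buffer via
// LangChain's handleLLMNewToken callback hook.
function makeTokenCollector() {
  let buffer = "";
  return {
    handler: {
      // Called by LangChain once per streamed token.
      handleLLMNewToken(token: string) {
        buffer += token;
      },
    },
    // Read back everything received so far.
    text: () => buffer,
  };
}

// Hypothetical wiring (requires an OpenAI API key):
// const llm = new OpenAI({ temperature: 0, streaming: true });
// const collector = makeTokenCollector();
// await overallChain.run("Tragedy at sunset on the beach", [collector.handler]);
// collector.text() now holds the streamed output so far.
```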
https://github.com/vercel-labs/ai/issues/205#issuecomment-1603437504 shows how chains work.
The limitation of streaming here will probably be that you still only get a string at the end, not a structured response like JSON.
My prompt gives me arrays of objects in JSON format. For example:
{
  "array": [
    {
      "title": "text",
      "description": "text"
    },
    {
      "title": "text",
      "description": "text"
    }
  ]
}
What I want to do is stream the JSON, take each object in the array as it completes, transform that JSON object into text (Title: text\nDescription: text), and show the transformed object to the user. I want streaming because of generation time: a response can consist of more than 10 elements, so just awaiting the entire response takes a long time (I use gpt-4). I hope I have made myself clear.
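The accumulate-until-complete idea described above can be sketched as a small brace-depth scanner (JsonArrayItemExtractor and formatIdea are hypothetical names, not SDK API; the shape assumed is the wrapper object shown earlier):

```typescript
type Idea = { title: string; description: string };

// Tracks brace depth across stream chunks and emits each array element
// as soon as its closing brace arrives. Assumes the shape shown above:
// a top-level object whose array elements are the depth-2 objects.
class JsonArrayItemExtractor {
  private depth = 0;        // current { ... } nesting depth
  private inString = false; // inside a JSON string literal?
  private escaped = false;  // was the previous char a backslash?
  private current = "";     // text of the element being accumulated

  /** Feed one stream chunk; returns any objects completed by it. */
  push(chunk: string): Idea[] {
    const out: Idea[] = [];
    for (const ch of chunk) {
      if (this.depth > 0) this.current += ch;
      if (this.inString) {
        if (this.escaped) this.escaped = false;
        else if (ch === "\\") this.escaped = true;
        else if (ch === '"') this.inString = false;
        continue; // braces inside strings don't change depth
      }
      if (ch === '"') this.inString = true;
      else if (ch === "{") {
        this.depth++;
        if (this.depth === 2) this.current = "{"; // a new array element starts
      } else if (ch === "}") {
        this.depth--;
        if (this.depth === 1) {
          out.push(JSON.parse(this.current) as Idea); // element complete
          this.current = "";
        }
      }
    }
    return out;
  }
}

// Transform one parsed object into the human-readable form.
function formatIdea(idea: Idea): string {
  return `Title: ${idea.title}\nDescription: ${idea.description}`;
}
```

Feeding each streamed chunk to push() yields completed objects incrementally, so the first element can be shown to the user while the rest is still generating.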
Is there a reason the input is an array? In theory, could it be a single element? Or is there some dependency on the previous object?
It seems like you may just want to have n different connections (API calls) open and streaming, one per element in the array. That would reduce the problem to "I have an input, I need to transform it, and stream the response."
The useChat hook might not be the best fit in your situation, but you can probably use useSWR and treat it as a single element.
I generate an array of 3 objects (3 ideas). The user chooses one idea and passes it as input to the next step (next system prompt, step, call it whatever you want). GPT generates these ideas as a JSON array, but I want to format that JSON like this: Idea title: 'text'. Description: 'text' and show it like that in the chat.
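If waiting for the full response is acceptable, the formatting step alone is simple; a minimal sketch, assuming the JSON shape shown earlier (renderIdeas is a hypothetical helper):

```typescript
type Idea = { title: string; description: string };

// Parse the full JSON response and render each idea in the
// "Idea title: '...'. Description: '...'" form described above.
function renderIdeas(json: string): string[] {
  const parsed = JSON.parse(json) as { array: Idea[] };
  return parsed.array.map(
    (i) => `Idea title: '${i.title}'. Description: '${i.description}'`
  );
}
```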
Hey @panilya, how did you solve this?
Hi @camro, I generate plain text instead of JSON now. When the user clicks on "go to next step", I take the last message from the messages array, clear the entire messages array, and send a new request to the chat endpoint, where the input is the message I selected from the messages array.
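The flow @panilya describes can be sketched as follows, assuming the useChat hook's setMessages/append API from the Vercel AI SDK (the helper name nextStepInput is hypothetical):

```typescript
type Message = { id: string; role: "user" | "assistant"; content: string };

// Pick the message that feeds the next step and reset the history.
function nextStepInput(messages: Message[]): { input: string; cleared: Message[] } {
  const last = messages[messages.length - 1];
  return { input: last ? last.content : "", cleared: [] };
}

// Hypothetical wiring inside a React component:
// const { messages, setMessages, append } = useChat({ api: "/api/chat" });
// const onNextStep = () => {
//   const { input, cleared } = nextStepInput(messages);
//   setMessages(cleared);                       // clear the messages array
//   append({ role: "user", content: input });  // re-send to the chat endpoint
// };
```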
If you're using useChat and React, you can use the experimental_StreamData API as well: https://sdk.vercel.ai/docs/api-reference/stream-data
Hi @panilya, were you able to pass data from one tool to the other using the Next.js Vercel AI SDK? If yes, please share how.