error Failed to convert the response to stream. Received status code: 400.
- error node_modules/ai/dist/index.mjs (110:10) @ AIStream
I'm following the tutorial, but I got this error.
// ./app/api/chat/route.js
import { Configuration, OpenAIApi } from 'openai-edge'
import { OpenAIStream, StreamingTextResponse } from 'ai'

const config = new Configuration({
  apiKey: process.env.OPENAI_API_KEY
})
const openai = new OpenAIApi(config)

export const runtime = 'edge'

export async function POST (req) {
  const { messages } = await req.json()
  console.log(messages)
  const response = await openai.createChatCompletion({
    model: 'gpt-3.5-turbo',
    stream: true,
    messages
  })
  const stream = OpenAIStream(response)
  return new StreamingTextResponse(stream)
}
Could you provide more context on the messages? How are you populating it in your page? What is the value being passed / mapped?
I'm receiving a 429 error code, despite this being my first time using the OpenAI API.
Implementing the example set out in the documentation, the messages format is as follows:
[ { role: 'user', content: 'What is mechanical engineering?' } ]
// src/app/api/chat/route.ts
import { OpenAIStream, StreamingTextResponse } from "ai";
import { Configuration, OpenAIApi } from "openai-edge";

const config = new Configuration({
  apiKey: process.env.OPENAI_API_KEY,
});
const openai = new OpenAIApi(config);

export const runtime = "edge";

export async function POST(req: Request) {
  const { messages } = await req.json();
  console.log(messages);
  const response = await openai.createChatCompletion({
    model: "gpt-3.5-turbo",
    stream: true,
    messages,
  });
  const stream = OpenAIStream(response);
  return new StreamingTextResponse(stream);
}
// src/app/page.tsx
'use client'

import { useChat } from 'ai/react'

export default function Chat() {
  const { messages, input, handleInputChange, handleSubmit } = useChat()

  return (
    <div className="mx-auto w-full max-w-md py-24 flex flex-col stretch">
      {messages.map(m => (
        <div key={m.id}>
          {m.role === 'user' ? 'User: ' : 'AI: '}
          {m.content}
        </div>
      ))}
      <form onSubmit={handleSubmit}>
        <label>
          Say something...
          <input
            className="fixed w-full max-w-md bottom-0 border border-gray-300 rounded mb-8 shadow-xl p-2"
            value={input}
            onChange={handleInputChange}
          />
        </label>
        <button type="submit">Send</button>
      </form>
    </div>
  )
}
After checking the billing page on OpenAI, I discovered that my trial had expired. Once I attached billing details, set usage limits, and created a new API token, I started receiving responses from the API.
Currently OpenAIStream(res) throws this error whenever res.status isn't 2xx, which means the AI provider (e.g. OpenAI) already returned an error response.
As an enhancement, we should surface the detailed error message from res to make investigation easier. The difficulty is that const stream = OpenAIStream(res) is synchronous, so we can't simply do something like throw new Error(await res.text()) there. We need to find a better way.
One idea can be to leverage conditional stream processing within the OpenAIStream function, enhancing its capability to handle non-2xx HTTP responses efficiently. This approach ensures the synchronous nature of the function is preserved.
Outlined steps:
1. Evaluate response status: use res.ok to ascertain whether the status code is within the 2xx range.
2. Process successful responses: for successful responses, continue with the standard stream processing.
3. Handle erroneous responses: for non-2xx responses, create a custom ReadableStream. Check that res.body is not null, then asynchronously extract and decode the response body.
4. Propagate the detailed error: use controller.error to propagate a detailed error message.
The idea goes as follows:
export function OpenAIStream(
  res: Response,
  cb?: AIStreamCallbacks
): ReadableStream {
  if (res.ok) {
    return AIStream(res, parseOpenAIStream(), cb);
  } else {
    if (res.body) {
      const reader = res.body.getReader();
      return new ReadableStream({
        async start(controller) {
          const { done, value } = await reader.read();
          if (!done) {
            const errorText = new TextDecoder().decode(value);
            controller.error(new Error(`Response error: ${errorText}`));
          }
        }
      });
    } else {
      return new ReadableStream({
        start(controller) {
          controller.error(new Error('Response error: No response body'));
        }
      });
    }
  }
}
This integrates asynchronous error handling while keeping the function's synchronous signature, and yields more informative error diagnostics.
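For callers, the error then surfaces when the stream is read rather than when it is created. A minimal standalone sketch of the same pattern (errorStreamFromResponse and readError are illustrative names, not SDK APIs):

```typescript
// Sketch of the error-stream pattern proposed above (not the actual SDK
// source): a non-2xx Response becomes a ReadableStream that errors with
// the decoded body, so the caller sees the provider's message.
function errorStreamFromResponse(res: Response): ReadableStream {
  return new ReadableStream({
    async start(controller) {
      const errorText = res.body ? await res.text() : 'No response body';
      controller.error(new Error(`Response error: ${errorText}`));
    }
  });
}

// Reading the stream rejects with the detailed error message.
async function readError(res: Response): Promise<string> {
  const reader = errorStreamFromResponse(res).getReader();
  try {
    await reader.read();
    return '';
  } catch (err) {
    return (err as Error).message;
  }
}
```

Because the error is deferred into the stream, OpenAIStream itself stays synchronous while the provider's error body is still propagated to whoever consumes the stream.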
Okay, I had used the wrong message format. Changing "message" to "content" fixed it.

Before (incorrect):
{
  "messages": [{ "role": "user", "message": "a slogan for my next app" }]
}

After (correct):
{
  "messages": [{ "role": "user", "content": "a slogan for my next app" }]
}
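A small runtime guard in the route handler can catch this shape mistake early and fail with a clear message instead of forwarding a malformed payload to OpenAI. A sketch (assertValidMessages is a hypothetical helper, not part of the ai SDK):

```typescript
// Hypothetical helper (not part of the ai SDK): reject messages that use
// "message" (or anything else) instead of the required "content" field.
type ChatMessage = { role: 'system' | 'user' | 'assistant'; content: string };

function assertValidMessages(messages: unknown): asserts messages is ChatMessage[] {
  if (
    !Array.isArray(messages) ||
    messages.some(m => typeof m?.content !== 'string')
  ) {
    throw new Error('Each message must have a string "content" field');
  }
}
```

In the POST handler, calling this right after `await req.json()` would turn the silent 400 from the provider into an immediate, descriptive error.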
@sirlolcat Love that idea! Would you be willing to open a PR for it?
@shuding It's addressed in #163, let me know if there are any additional changes needed.