
error Failed to convert the response to stream. Received status code: 400.

Open crapthings opened this issue 2 years ago • 8 comments

  • error node_modules/ai/dist/index.mjs (110:10) @ AIStream
  • error Failed to convert the response to stream. Received status code: 400.

I'm following the tutorial, but I got this error.

// ./app/api/chat/route.js
import { Configuration, OpenAIApi } from 'openai-edge'
import { OpenAIStream, StreamingTextResponse } from 'ai'

const config = new Configuration({
  apiKey: process.env.OPENAI_API_KEY
})
const openai = new OpenAIApi(config)

export const runtime = 'edge'

export async function POST (req) {
  const { messages } = await req.json()
  console.log(messages)
  const response = await openai.createChatCompletion({
    model: 'gpt-3.5-turbo',
    stream: true,
    messages
  })
  const stream = OpenAIStream(response)
  return new StreamingTextResponse(stream)
}

crapthings avatar Jun 18 '23 09:06 crapthings

Could you provide more context on the messages? How are you populating them in your page? What value is being passed / mapped?

0x5844 avatar Jun 18 '23 10:06 0x5844

I'm receiving a 429 error code, despite this being my first time using the OpenAI API.

I implemented the example set out in the documentation; the messages format is as follows:

[ { role: 'user', content: 'What is mechanical engineering?' } ]
// src/app/api/chat/route.ts
import { OpenAIStream, StreamingTextResponse } from "ai";
import { Configuration, OpenAIApi } from "openai-edge";

const config = new Configuration({
	apiKey: process.env.OPENAI_API_KEY,
});
const openai = new OpenAIApi(config);

export const runtime = "edge";

export async function POST(req: Request) {
	const { messages } = await req.json();
	console.log(messages);
	const response = await openai.createChatCompletion({
		model: "gpt-3.5-turbo",
		stream: true,
		messages,
	});

	const stream = OpenAIStream(response);
	return new StreamingTextResponse(stream);
}
// src/app/page.tsx
'use client'
 
import { useChat } from 'ai/react'
 
export default function Chat() {
  const { messages, input, handleInputChange, handleSubmit } = useChat()
 
  return (
    <div className="mx-auto w-full max-w-md py-24 flex flex-col stretch">
      {messages.map(m => (
        <div key={m.id}>
          {m.role === 'user' ? 'User: ' : 'AI: '}
          {m.content}
        </div>
      ))}
 
      <form onSubmit={handleSubmit}>
        <label>
          Say something...
          <input
            className="fixed w-full max-w-md bottom-0 border border-gray-300 rounded mb-8 shadow-xl p-2"
            value={input}
            onChange={handleInputChange}
          />
        </label>
        <button type="submit">Send</button>
      </form>
    </div>
  )
}

karlbateman avatar Jun 18 '23 12:06 karlbateman

After checking the billing page on OpenAI, I discovered that my trial had expired. After attaching billing details and setting usage limits, I created a new API token and I'm now receiving responses from the API.

karlbateman avatar Jun 18 '23 14:06 karlbateman

Currently OpenAIStream(res) throws with this error if res.status isn't 2xx. This means the response from the AI provider (like OpenAI) was already an error.

As an enhancement, we should throw with the detailed error message from res to make it easier to investigate. The current difficulty is that const stream = OpenAIStream(res) is synchronous, which means we can't do something like throw new Error(await res.text()) there. Need to find a better way.
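
In the meantime, a caller can work around this in the route handler by checking res.ok before constructing the stream, since the handler itself is async. A rough sketch (the error passthrough here is illustrative, not SDK behavior):

import { Configuration, OpenAIApi } from 'openai-edge'
import { OpenAIStream, StreamingTextResponse } from 'ai'

const config = new Configuration({ apiKey: process.env.OPENAI_API_KEY })
const openai = new OpenAIApi(config)

export const runtime = 'edge'

export async function POST(req: Request) {
  const { messages } = await req.json()
  const response = await openai.createChatCompletion({
    model: 'gpt-3.5-turbo',
    stream: true,
    messages,
  })

  // Unlike inside the synchronous OpenAIStream, we can await the body here
  // and surface the provider's error detail before converting to a stream.
  if (!response.ok) {
    return new Response(await response.text(), { status: response.status })
  }

  return new StreamingTextResponse(OpenAIStream(response))
}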

shuding avatar Jun 18 '23 15:06 shuding

One idea is to branch on the response status inside the OpenAIStream function, so it can handle non-2xx HTTP responses while its synchronous signature is preserved.

Outlined Steps:

  • Evaluate Response Status: Use res.ok to check whether the status code is in the 2xx range.

  • Process Successful Responses: For 2xx responses, continue with the standard stream processing.

  • Handle Erroneous Responses: For non-2xx responses, return a custom ReadableStream. If res.body is not null, asynchronously read and decode the response body.

  • Propagate Detailed Error: Call controller.error with the decoded error message.

The idea goes as follows:

export function OpenAIStream(
  res: Response,
  cb?: AIStreamCallbacks
): ReadableStream {
  if (res.ok) {
    return AIStream(res, parseOpenAIStream(), cb);
  }

  if (res.body) {
    const reader = res.body.getReader();
    return new ReadableStream({
      async start(controller) {
        // The error body may arrive in several chunks, so accumulate
        // everything before erroring the stream.
        const decoder = new TextDecoder();
        let errorText = '';
        while (true) {
          const { done, value } = await reader.read();
          if (done) break;
          errorText += decoder.decode(value, { stream: true });
        }
        controller.error(new Error(`Response error: ${errorText}`));
      }
    });
  }

  return new ReadableStream({
    start(controller) {
      controller.error(new Error('Response error: No response body'));
    }
  });
}

This handles the error body asynchronously inside the returned stream while keeping OpenAIStream's synchronous signature, and it surfaces far more informative error diagnostics.
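
For example, a consumer of the returned stream would then see the detailed message when the first read rejects. A minimal sketch, assuming a non-2xx upstream response (inside an async function):

const stream = OpenAIStream(response);
const reader = stream.getReader();
try {
  await reader.read();
} catch (err) {
  // The decoded provider error body is now part of the message, e.g.
  // 'Response error: {"error":{"message":"..."}}'
  console.error(err);
}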

0x5844 avatar Jun 18 '23 16:06 0x5844

Okay, I was using the wrong message format. Changing "message" to "content" fixes it.

Before (wrong):

{
	"messages": [{ "role": "user", "message": "a slogan for my next app" }]
}

After (correct):

{
	"messages": [{ "role": "user", "content": "a slogan for my next app" }]
}
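
For reference, each message has to follow OpenAI's chat format, roughly (a sketch of the shape; the SDK's exact type may differ):

type ChatMessage = {
  role: 'system' | 'user' | 'assistant'
  content: string // note: "content", not "message"
}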

crapthings avatar Jun 19 '23 02:06 crapthings

@sirlolcat Love that idea! Would you be willing to open a PR for it?

shuding avatar Jun 19 '23 06:06 shuding

@shuding It's addressed in #163, let me know if there are any additional changes needed.

0x5844 avatar Jun 19 '23 14:06 0x5844