
fetch, ESM & streamCompletion util

Open gfortaine opened this issue 2 years ago • 13 comments

  • chore(upgrade): upgrade to TypeScript 4.9
  • chore(upgrade): enable ESM support
  • chore(upgrade): add streamCompletion helper function (credits to @schnerd & @rauschma)
  • chore(upgrade): update README for fetch support

gfortaine avatar Jan 04 '23 18:01 gfortaine
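The streamCompletion helper from the checklist above isn't shown in the thread; here is a minimal sketch of what such an SSE-based helper might look like, following the async-iterable pattern credited above (all names and parsing details are assumptions, not the PR's actual code):

```typescript
// Hypothetical sketch of an SSE-based streamCompletion helper; the actual
// helper in the PR may differ. Completions are streamed as Server-Sent
// Events, one JSON payload per "data:" line, terminated by "data: [DONE]".
export function parseSSELines(chunk: string): string[] {
  return chunk
    .split("\n")
    .filter((line) => line.startsWith("data: "))
    .map((line) => line.slice("data: ".length).trim())
    .filter((payload) => payload !== "[DONE]");
}

// Drive the parser from any async iterable of decoded text chunks
// (e.g. a response body piped through a TextDecoder).
export async function* streamCompletion(
  chunks: AsyncIterable<string>
): AsyncGenerator<string> {
  for await (const chunk of chunks) {
    for (const payload of parseSSELines(chunk)) {
      yield payload; // each payload is one JSON completion delta
    }
  }
}
```

In real use the chunks would come from the HTTP response stream; error handling and buffering of events split across chunk boundaries are omitted for brevity.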

Related https://github.com/openai/openai-openapi/pull/10

gfortaine avatar Jan 04 '23 18:01 gfortaine

Here is the package 🎉: https://www.npmjs.com/package/@fortaine/openai

gfortaine avatar Jan 05 '23 00:01 gfortaine

Thank you for all the contributions! Just wanted to let you know that we've seen this and will respond soon; things have been a little busy and we're having trouble carving out time to fully evaluate all the changes included here.

schnerd avatar Jan 18 '23 03:01 schnerd

Thank you for making this PR! Very excited for it, as it will allow the client to be used on Edge compute platforms (e.g. Vercel Edge Functions).

leerob avatar Jan 18 '23 23:01 leerob

I just found this PR and tried to get it working. There are quite a few changes that still need to be documented. For example, authentication now works differently.

The setup is now:

import {
  createConfiguration,
  OpenAIApi,
} from "openai";

const configuration = createConfiguration({
  authMethods: { apiKeyAuth: { accessToken: process.env.OPENAI_API_KEY } },
});

const openai = new OpenAIApi(configuration);

That works in a normal Node environment. In an edge environment, however, it throws an error:

error - ./node_modules/@fortaine/openai/dist/http/http.js:1:0
Module not found: Package path . is not exported from package node_modules/filedirname (see exports field in node_modules/filedirname/package.json)

None of these file-system APIs are available in an edge environment, so the packages fs, path, filedirname, and get-current-line won't work. The ajv package also relies on dynamic code evaluation, which is disallowed there: https://nextjs.org/docs/messages/edge-dynamic-code-evaluation

DennisKo avatar Jan 27 '23 11:01 DennisKo

@schnerd @leerob @DennisKo this PR should be fully compatible with Vercel's Edge Runtime now 🚀 A few notes:

  • This package is now pure ESM
  • Ajv SerDes has been removed for the time being, and ObjectSerializer has been fixed (the generator was emitting "any" types, which caused nasty serialization bugs, for example when passing an array as prompt to createCompletion)

gfortaine avatar Jan 30 '23 00:01 gfortaine
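To illustrate the serialization bug described above: prompt in createCompletion accepts either a string or an array of strings, and a serializer that widens everything to "any" can silently mangle the array form. A minimal sketch of the request-body shape (illustrative only, not the library's actual ObjectSerializer):

```typescript
// Sketch of the createCompletion request-body shape relevant to the bug:
// prompt may be a string OR an array of strings, and the serializer must
// preserve the array form verbatim rather than widening it to "any".
// (Illustrative type only, not the library's actual ObjectSerializer.)
type CreateCompletionBody = {
  model: string;
  prompt: string | string[];
};

export function serializeBody(body: CreateCompletionBody): string {
  // Plain JSON serialization keeps both forms intact.
  return JSON.stringify(body);
}
```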

Really looking forward to this PR – it will enable the OpenAI API to be used in the background service workers of web extensions!

I feel like bundling isomorphic-fetch and http might bloat the bundle. Ideally, the constructor would accept a custom axios instance.

louisgv avatar Jan 31 '23 06:01 louisgv

This adds a number of dependencies that make it harder to run in other contexts, for example a Cloudflare Worker (which doesn't run Node). Just switching out axios for fetch would be great on its own.

danielrhodes avatar Feb 09 '23 05:02 danielrhodes

Any update here?

Would love to see this merged soon!

cfortuner avatar Feb 20 '23 13:02 cfortuner

Patiently awaiting this ❤️

rogerahuntley avatar Mar 02 '23 03:03 rogerahuntley

> Patiently awaiting this ❤️

I have a lazy, automated version of this: @ericlewis/openai. It is updated every hour by a VERY simple script, shared here for transparency:

#!/usr/bin/env bash

touch .npmrc && echo "//registry.npmjs.org/:_authToken=\${NPM_TOKEN}" >> .npmrc

openai_version=$(npm view openai version)
ericlewis_version=$(npm view @ericlewis/openai version)

# Compare the versions and output the result
if [[ "$openai_version" == "$ericlewis_version" ]]; then
  echo "Versions match, skipping."
else
  npm install change-package-name typescript@4 --save-dev
  npm uninstall axios
  npm install redaxios
  npx change-package-name @ericlewis/openai
  
  sed -i "s/import type { AxiosPromise, AxiosInstance, AxiosRequestConfig } from 'axios';/type AxiosPromise<T = any> = Promise<{data: T, status: number, statusText: string, request?: any, headers: any, config: any}>;\ntype AxiosInstance = any;\ntype AxiosRequestConfig = any;\nimport globalAxios from 'redaxios';/g" *.ts
  sed -i "s/import type { AxiosInstance, AxiosResponse } from 'axios';/type AxiosInstance = any;/g" *.ts
  sed -i "s/<T = unknown, R = AxiosResponse<T>>//g" *.ts
  sed -i "s/return axios.request<T, R>(axiosRequestArgs);/return axios.request(axiosRequestArgs);/g" *.ts
  sed -i 's/"target": "es6",/"target": "es2021",\n    "esModuleInterop": true,/g' tsconfig.json
  sed -i '/import globalAxios from '\''axios'\'';/d' *.ts
  
  npm run build
  npm publish
fi

ericlewis avatar Mar 02 '23 17:03 ericlewis

@leerob @DennisKo @louisgv @danielrhodes @cfortuner @rogerahuntley @ericlewis cc @schnerd Here it is 🎉:

import fetchAdapter from "@haverstack/axios-fetch-adapter";

const configuration = new Configuration({
  apiKey: process.env.OPENAI_API_KEY,
  baseOptions: { 
    adapter: fetchAdapter
  }
});

Various references:

  • Using Ory with Cloudflare Workers cc @hyl

  • https://github.com/openai/openai-node/issues/30#issuecomment-1342350056 cc @aliuq

  • https://github.com/hwchase17/langchainjs/pull/118 cc @nfcampos

gfortaine avatar Mar 06 '23 00:03 gfortaine

import fetchAdapter from "@haverstack/axios-fetch-adapter";

const configuration = new Configuration({
  apiKey: process.env.OPENAI_API_KEY,
  baseOptions: { 
    adapter: fetchAdapter
  }
});

Note: It doesn't seem like @vespaiach/axios-fetch-adapter supports streaming (cc @gfortaine)

dqbd avatar Mar 06 '23 01:03 dqbd
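For context on the streaming limitation noted above: streaming over fetch means reading response.body as a ReadableStream instead of buffering it, so an adapter that resolves only once the body is fully read cannot expose the stream. A rough sketch of reading such a body directly (hypothetical helper; assumes a WHATWG ReadableStream, as in edge runtimes and Node 18+):

```typescript
// Hypothetical helper: read a streamed fetch body chunk by chunk,
// decoding bytes to text. An adapter that buffers the whole body
// before resolving can never hand this stream to the caller.
export async function readStream(
  body: ReadableStream<Uint8Array>
): Promise<string> {
  const reader = body.getReader();
  const decoder = new TextDecoder();
  let text = "";
  for (;;) {
    const { done, value } = await reader.read();
    if (done) break;
    text += decoder.decode(value, { stream: true });
  }
  return text + decoder.decode(); // flush any trailing bytes
}
```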

Is there an ETA for this? How can I help move this forward?

s123121 avatar Mar 24 '23 05:03 s123121

This is sorely needed. Can someone speak to what's needed and where help can be directed?

dustinlacewell avatar Mar 29 '23 06:03 dustinlacewell

@s123121 @dustinlacewell in the meantime you can use openai-edge (supports streaming)

dan-kwiat avatar Mar 31 '23 11:03 dan-kwiat

> @s123121 @dustinlacewell in the meantime you can use openai-edge (supports streaming)

It's not helpful for third-party libraries that use openai though. (Though it is very welcome, thanks for making it.)

dustinlacewell avatar Mar 31 '23 15:03 dustinlacewell

> > @s123121 @dustinlacewell in the meantime you can use openai-edge (supports streaming)
> >
> > It's not helpful for third-party libraries that use openai though. (Though it is very welcome, thanks for making it.)

About to install and try this out. Are you saying it won't work for non-Next apps like Nuxt, etc?

dosstx avatar May 28 '23 21:05 dosstx

@dosstx did you try it? openai-edge has no dependencies, so it works anywhere. If you're running in an environment without a global fetch defined, you can pass your own since v0.6:

import { Configuration, OpenAIApi } from "openai-edge"
const configuration = new Configuration({
  apiKey: "your-key-here"
})
const openai = new OpenAIApi(configuration, null, $fetch)

dan-kwiat avatar May 29 '23 12:05 dan-kwiat

@dan-kwiat Using Nuxt and setting up the code like so, I get an empty response object back despite status 200:

response from server (console dump; interleaved Vite HMR log lines and timestamps omitted):

Response {
  [Symbol(realm)]: { settingsObject: {} },
  [Symbol(state)]: {
    aborted: false,
    rangeRequested: false,
    timingAllowPassed: false,
    requestIncludesCredentials: false,
    urlList: [],
    body: { stream: undefined, source: null, length: null }
  },
  [Symbol(headers)]: HeadersList {
    cookies: null,
    [Symbol(headers map)]: Map(4) {
      'access-control-allow-origin' => [Object],
      'content-type' => [Object],
      'cache-control' => [Object],
      'x-accel-buffering' => [Object]
    },
    [Symbol(headers map sorted)]: null
  }
}

server/api/chat.post.js: <---- the server API route that calls the OpenAI API


import { Configuration, OpenAIApi } from "openai-edge"

const rConfig = useRuntimeConfig()

const configuration = new Configuration({
  apiKey: rConfig.OPENAI_API_KEY,
})
const openai = new OpenAIApi(configuration)

export const config = {
  runtime: "edge",
}

export default defineEventHandler(async (event) => {

  try {
    const completion = await openai.createChatCompletion({
      model: "gpt-3.5-turbo",
      messages: [
        { role: "system", content: "You are a helpful assistant." },
        { role: "user", content: "Who won the world series in 2020?" },
        {
          role: "assistant",
          content: "The Los Angeles Dodgers won the World Series in 2020.",
        },
        { role: "user", content: "Where was it played?" },
      ],
      max_tokens: 7,
      temperature: 0,
      stream: true,
    })

    const response = new Response(completion.body, {
      headers: {
        "Access-Control-Allow-Origin": "*",
        "Content-Type": "text/event-stream;charset=utf-8",
        "Cache-Control": "no-cache, no-transform",
        "X-Accel-Buffering": "no",
      },
    })
    console.log('server: ', response)
    return response
  } catch (error) {
    console.error(error)

    return new Response(JSON.stringify(error), {
      status: 400,
      headers: {
        "content-type": "application/json",
      },
    })
  }
})

dosstx avatar May 29 '23 12:05 dosstx
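One possible explanation for the "empty" object above: a streamed Response logs with body: { stream: undefined, ... } until something actually reads it; the content only appears as the client consumes and decodes the SSE chunks. A rough sketch of turning decoded chunk text into assistant text (field names follow the chat-completions streaming payload; the helper name is made up):

```typescript
// Extract assistant text from decoded SSE chunk text. Each "data:" line
// carries a JSON chat-completion chunk whose choices[0].delta.content
// holds the next piece of the reply. Helper name is made up.
export function extractDeltas(chunkText: string): string {
  let out = "";
  for (const line of chunkText.split("\n")) {
    if (!line.startsWith("data: ")) continue;
    const payload = line.slice("data: ".length).trim();
    if (payload === "[DONE]") continue;
    const parsed = JSON.parse(payload);
    out += parsed.choices?.[0]?.delta?.content ?? "";
  }
  return out;
}
```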

> > > @s123121 @dustinlacewell in the meantime you can use openai-edge (supports streaming)
> >
> > It's not helpful for third-party libraries that use openai though. (Though it is very welcome, thanks for making it.)
>
> About to install and try this out. Are you saying it won't work for non-Next apps like Nuxt, etc?

I was speaking of third-party packages that use openai as a dependency. There's no good way of forcing those packages to use openai-edge, as far as I know.

dustinlacewell avatar Jun 02 '23 17:06 dustinlacewell

Thank you for putting this together!

Great news: a new, fully rewritten v4.0.0 is coming that supports ESM, uses fetch, and has conveniences for streaming (with more on the way).

Please give it a try and let us know what you think in the linked discussion!

rattrayalex avatar Jul 10 '23 00:07 rattrayalex