
A community-driven Swift package for the OpenAI public API

80 OpenAI issues

I'd like a way to retrieve a log of previous chats. Anyway, thanks for this great API!

The `host` that you supply to `OpenAI.Configuration` is converted into a URL by `buildURL`, which has the scheme hard-coded to `https`. With projects like GPT4All allowing users to run a...
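A minimal sketch of the limitation being described, assuming the URL is assembled with `URLComponents` (the real `buildURL` may differ in detail): because the scheme is fixed to `https`, a host that is only reachable over plain HTTP, such as a local GPT4All server, cannot be targeted. The second overload shows one hypothetical relaxation in which the scheme is configurable.

```swift
import Foundation

// Sketch of the hard-coded scheme the issue refers to; not the library's exact code.
func buildURL(host: String, path: String) -> URL {
    var components = URLComponents()
    components.scheme = "https"   // hard-coded, so an http-only host cannot be used
    components.host = host
    components.path = path
    return components.url!
}

// Hypothetical relaxation: let the configuration carry the scheme.
func buildURL(scheme: String = "https", host: String, path: String) -> URL {
    var components = URLComponents()
    components.scheme = scheme
    components.host = host
    components.path = path
    return components.url!
}
```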

The OpenAI API seems to allow both strings and arrays of strings in order to [batch requests](https://platform.openai.com/docs/guides/rate-limits/batching-requests): Here’s the canonical example from the docs using Python: ``` import openai #...
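The Python example in the preview is truncated; on the Swift side, one hypothetical way to model a prompt that accepts either a single string or an array of strings (the library's current prompt type is a plain `String`, so this is only a sketch of the request) is a small `Codable` enum:

```swift
// Hypothetical sketch, not the library's current API: a prompt that encodes
// either as one string or as an array of strings, mirroring what the REST API
// accepts for batched requests.
enum Prompt: Codable {
    case single(String)
    case batch([String])

    func encode(to encoder: Encoder) throws {
        var container = encoder.singleValueContainer()
        switch self {
        case .single(let value): try container.encode(value)
        case .batch(let values): try container.encode(values)
        }
    }

    init(from decoder: Decoder) throws {
        let container = try decoder.singleValueContainer()
        if let value = try? container.decode(String.self) {
            self = .single(value)
        } else {
            self = .batch(try container.decode([String].self))
        }
    }
}
```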

**Is your feature request related to a problem? Please describe.** Right, it's pretty nice that we can init an `OpenAI` instance with a custom host, like ``` let openAI = OpenAI(configuration: OpenAI.Configuration( token:...
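For context, a complete custom-host configuration looks roughly like the sketch below; parameter names beyond `token` and `host` (for example `timeoutInterval`) are assumptions and may vary between library versions.

```swift
import OpenAI

// Custom host configuration; the scheme is still forced to https by buildURL.
let openAI = OpenAI(configuration: OpenAI.Configuration(
    token: "sk-...",
    host: "my-proxy.example.com",
    timeoutInterval: 60.0
))
```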

**Is your feature request related to a problem? Please describe.** The Demo App is very Apple-like and I think it would not only be nice to give it some character,...

**Is your feature request related to a problem? Please describe.** The demo app includes a chat example; audio and image generation remain TBD. **Describe the solution you'd like** Using the...
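A sketch of the kind of call an image-generation tab in the demo could exercise; the exact query fields and result shape may differ by library version, and the prompt text is just an example.

```swift
import OpenAI

let openAI = OpenAI(apiToken: "sk-...")

// Generate a single image and print the returned URL, assuming the async
// images(query:) entry point and a string-based size parameter.
func generateImage() async throws {
    let query = ImagesQuery(prompt: "A watercolor robot reading Swift code", n: 1, size: "1024x1024")
    let result = try await openAI.images(query: query)
    print(result.data.first?.url ?? "no image returned")
}
```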

help wanted

Extend the library by adding fine-tuning capabilities. Add `files` and `fine-tunes` API calls so the library supports model tuning and file retrieval. Documentation * [Files](https://platform.openai.com/docs/api-reference/files) * [Fine-Tunes](https://platform.openai.com/docs/api-reference/fine-tunes)
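Since these endpoints do not exist in the library yet, the sketch below only illustrates what the requested surface could look like; every type and method in it is hypothetical.

```swift
import Foundation

// Hypothetical query types and API surface for the requested endpoints.
struct FileUploadQuery: Codable {
    let purpose: String      // e.g. "fine-tune"
    let fileName: String
    let data: Data
}

struct FineTunesQuery: Codable {
    let trainingFile: String // file ID returned by the files endpoint
    let model: String        // base model to tune
}

protocol FineTuningAPI {
    func files(query: FileUploadQuery) async throws -> Data
    func fineTunes(query: FineTunesQuery) async throws -> Data
}
```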

enhancement
help wanted

## What Added a feature to retrieve token usage in stream chat mode. ## Why This functionality is available in the original API and could be useful for some users....
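A rough sketch of how the added usage information might be consumed while streaming; the `usage` property on the stream result is what this PR introduces, and the surrounding names (`chatsStream`, `ChatStreamResult`) are assumptions that may differ by library version.

```swift
import OpenAI

// Accumulate streamed chat content and read token usage from the final chunk.
func streamWithUsage(openAI: OpenAI, query: ChatQuery) async throws {
    var text = ""
    for try await result in openAI.chatsStream(query: query) {
        text += result.choices.first?.delta.content ?? ""
        if let usage = result.usage {   // hypothetical field populated on the last chunk
            print("prompt: \(usage.promptTokens), completion: \(usage.completionTokens)")
        }
    }
    print(text)
}
```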

## What This PR adds fully customizable endpoint support. Instead of "just" adding `basePath` you can now pass a fully configured endpoint URL. ## Why Internal/hosted OpenAI Instances sometimes use...

When I want to connect to an OpenAI-compatible proxy server (e.g. litellm), it does not accept requests that include `/v1` in the URL.
- This works: `http://localhost:8000/chat/completion`
- This does...
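A sketch of the path handling this request implies; the `basePath` knob does not exist in the current `Configuration` and is only shown to illustrate the idea of dropping the `/v1` prefix for proxies.

```swift
import Foundation

// Hypothetical configuration with a configurable base path.
struct ProxyConfiguration {
    var host = "localhost"
    var port = 8000
    var basePath = ""        // empty for proxies like litellm; "/v1" for api.openai.com

    func url(for endpoint: String) -> URL {
        var components = URLComponents()
        components.scheme = "http"
        components.host = host
        components.port = port
        components.path = basePath + endpoint   // e.g. "/chat/completions" with no "/v1" prefix
        return components.url!
    }
}

// ProxyConfiguration().url(for: "/chat/completions") -> http://localhost:8000/chat/completions
```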