Add an API provider abstraction layer to easily support more services/models/APIs
This has been mentioned in other issues in the past:
> I wonder if it's worth implementing a wrapper/abstraction layer like LiteLLM to make things more flexible?
>
> - https://github.com/BerriAI/litellm
>   > Call all LLM APIs using the OpenAI format. Use Bedrock, Azure, OpenAI, Cohere, Anthropic, Ollama, Sagemaker, HuggingFace, Replicate (100+ LLMs)
>
> This is what projects like aider use:
>
> - https://aider.chat/docs/llms/other.html
>   > Aider uses the litellm package to connect to hundreds of other models. You can use `aider --model <model-name>` to use any supported model. To explore the list of supported models you can run `aider --models <model-name>` with a partial model name. If the supplied name is not an exact match for a known model, aider will return a list of possible matching models.

*Originally posted by @0xdevalias in https://github.com/jehna/humanify/issues/14#issuecomment-2179563971*
And it would help support issues such as the following in a more unified way:
- https://github.com/jehna/humanify/issues/11
- https://github.com/jehna/humanify/issues/14
- https://github.com/jehna/humanify/issues/213
- https://github.com/jehna/humanify/pull/272
- https://github.com/jehna/humanify/issues/84
- https://github.com/jehna/humanify/issues/392
I haven't looked too deeply into all of the options available in this space, but one I re-stumbled across again today that made me think to create this issue (see the usage sketch after the list below):
- https://github.com/vercel/ai
  > The AI Toolkit for TypeScript. From the creators of Next.js, the AI SDK is a free open-source library for building AI-powered applications and agents
  - https://github.com/vercel/ai#ai-sdk-core
    > AI SDK Core
    >
    > The AI SDK Core module provides a unified API to interact with model providers like OpenAI, Anthropic, Google, and more. You will then install the model provider of your choice.
- https://sdk.vercel.ai/
  > The AI Toolkit for TypeScript
  - https://sdk.vercel.ai/docs/foundations/providers-and-models
    > Providers and Models
    >
    > AI SDK Core offers a standardized approach to interacting with LLMs through a language model specification that abstracts differences between providers. This unified interface allows you to switch between providers with ease while using the same API for all providers.
    - https://sdk.vercel.ai/docs/foundations/providers-and-models#ai-sdk-providers
      > AI SDK Providers
      >
      > The AI SDK comes with a wide range of providers that you can use to interact with different language models
      >
      > You can also use the OpenAI Compatible provider with OpenAI-compatible APIs
      >
      > Our language model specification is published as an open-source package, which you can use to create custom providers.
      >
      > The open-source community has created the following providers...
    - https://sdk.vercel.ai/docs/foundations/providers-and-models#self-hosted-models
      > Self-Hosted Models
      >
      > You can access self-hosted models with the following providers...
      >
      > Additionally, any self-hosted provider that supports the OpenAI specification can be used with the OpenAI Compatible Provider.
    - https://sdk.vercel.ai/docs/foundations/providers-and-models#model-capabilities
      > Model Capabilities
      >
      > The AI providers support different language models with various capabilities. Here are the capabilities of popular models...
  - https://sdk.vercel.ai/docs/ai-sdk-core/generating-structured-data
    > Generating Structured Data
  - https://sdk.vercel.ai/providers/community-providers/ollama
    > Ollama Provider
    >
    > sgomez/ollama-ai-provider is a community provider that uses Ollama to provide language model support for the AI SDK.
    - https://github.com/sgomez/ollama-ai-provider
      > Vercel AI Provider for running LLMs locally using Ollama
  - https://sdk.vercel.ai/providers/community-providers/openrouter
    > OpenRouter
    >
    > OpenRouter is a unified API gateway that provides access to hundreds of AI models from leading providers like Anthropic, Google, Meta, Mistral, and more. The OpenRouter provider for the AI SDK enables seamless integration with all these models while offering unique advantages
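To make that "unified API" concrete, here's a minimal sketch of what a provider-agnostic call could look like via AI SDK Core's `generateObject`. The prompt and schema here are purely hypothetical placeholders, not humanify's actual internals:

```ts
import { generateObject } from 'ai';
import { openai } from '@ai-sdk/openai';
// import { anthropic } from '@ai-sdk/anthropic'; // swapping providers is a one-line change
import { z } from 'zod';

// Hypothetical example: ask a model to suggest a better name for a minified
// identifier, as structured output validated against a Zod schema.
const { object } = await generateObject({
  model: openai('gpt-4o-mini'), // or e.g. anthropic('claude-3-5-sonnet-latest')
  schema: z.object({ newName: z.string() }),
  prompt:
    'Suggest a descriptive JavaScript variable name for `a` in: const a = document.querySelectorAll("img");',
});

console.log(object.newName);
```

Since the model is just a parameter here, the Ollama / OpenRouter community providers above could presumably be dropped in the same way.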
There's also this brief overview I did in a comment RE: wanting a JavaScript version of litellm:
- https://github.com/BerriAI/litellm/issues/361#issuecomment-2822862100
And it may be interesting to look into options/services like this that aim to support choosing/optimising across different providers/models/etc:
- https://www.notdiamond.ai/
  > An end-to-end multi-model framework
  >
  > **Intelligent routing:** Not Diamond can help you take any evaluation data for any set of models over any set of inputs and create an optimal routing algorithm tailored to your application.
  >
  > **Automatic prompt adaptation:** Automatically adapt prompts across LLMs so you always call the right model with the right prompt. No more manual tweaking and experimentation.
  - https://www.notdiamond.ai/pricing
See Also
- https://github.com/jehna/humanify/issues/14
- https://github.com/jehna/humanify/issues/84
- https://github.com/jehna/humanify/issues/213
- https://github.com/jehna/humanify/issues/392
- https://github.com/jehna/humanify/issues/416
- https://github.com/jehna/humanify/issues/419
- https://github.com/jehna/humanify/issues/481
- https://github.com/jehna/humanify/issues/502
The OpenAI Agents SDK:
- https://github.com/openai/openai-agents-js
  > OpenAI Agents SDK (JavaScript/TypeScript)
  >
  > A lightweight, powerful framework for multi-agent workflows and voice agents
  >
  > The OpenAI Agents SDK is a lightweight yet powerful framework for building multi-agent workflows in JavaScript/TypeScript. It is provider-agnostic, supporting OpenAI APIs and more.
  - https://openai.github.io/openai-agents-js/
Supports different models / model providers (including non-OpenAI ones):
- https://openai.github.io/openai-agents-js/guides/models/
  > Models
  >
  > Every Agent ultimately calls an LLM. The SDK abstracts models behind two lightweight interfaces:
  >
  > - `Model` – knows how to make one request against a specific API.
  > - `ModelProvider` – resolves human-readable model names (e.g. `'gpt-4o'`) to `Model` instances.
  - https://openai.github.io/openai-agents-js/guides/models/#custom-model-providers
    > Custom model providers
    >
    > Implementing your own provider is straightforward – implement `ModelProvider` and `Model` and pass the provider to the `Runner` constructor
It looks like we can find the `ModelProvider` / `Model` interfaces in `@openai/agents-core` / `@openai/agents`:
- https://openai.github.io/openai-agents-js/openai/agents-core/interfaces/modelprovider/
  > ModelProvider
  >
  > The base interface for a model provider.
  >
  > The model provider is responsible for looking up `Model` instances by name.
- https://openai.github.io/openai-agents-js/openai/agents-core/interfaces/model/
  > Model
  >
  > The base interface for calling an LLM.
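Based on those interfaces, a humanify-specific provider might look something like the sketch below. This assumes `getModel(modelName?)` is the whole `ModelProvider` surface (as the docs above suggest) and that the `Runner` constructor accepts a `modelProvider` option; the routing logic and class name are hypothetical:

```ts
import { Runner } from '@openai/agents';
import type { Model, ModelProvider } from '@openai/agents-core';
import { OpenAIProvider } from '@openai/agents-openai';

// Hypothetical provider that could route human-readable model names to
// whichever backend (OpenAI, Ollama, OpenRouter, ...) humanify has configured.
class HumanifyModelProvider implements ModelProvider {
  private readonly openai = new OpenAIProvider(); // reads OPENAI_API_KEY from the env

  async getModel(modelName?: string): Promise<Model> {
    // Real routing logic would dispatch on the name / config here; for now,
    // delegate everything to the stock OpenAI provider.
    return this.openai.getModel(modelName ?? 'gpt-4o-mini');
  }
}

// Per the "pass the provider to the Runner constructor" docs above.
const runner = new Runner({ modelProvider: new HumanifyModelProvider() });
```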
It looks like we can find the `OpenAIProvider` / `OpenAIChatCompletionsModel` / `OpenAIResponsesModel` classes in `@openai/agents-openai` / `@openai/agents`:
- https://openai.github.io/openai-agents-js/openai/agents-openai/classes/openaiprovider/
  > OpenAIProvider
  >
  > The provider of OpenAI’s models (or Chat Completions compatible ones)
- https://openai.github.io/openai-agents-js/openai/agents-openai/classes/openaichatcompletionsmodel/
  > OpenAIChatCompletionsModel
  >
  > A model that uses (or is compatible with) OpenAI’s Chat Completions API.
- https://openai.github.io/openai-agents-js/openai/agents-openai/classes/openairesponsesmodel/
  > OpenAIResponsesModel
  >
  > Model implementation that uses OpenAI’s Responses API to generate responses.
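Since `OpenAIChatCompletionsModel` works with "Chat Completions compatible" APIs, it could presumably front self-hosted backends too. A rough sketch, assuming the constructor takes an OpenAI client plus a model name (worth verifying against the class docs above), pointed at Ollama's OpenAI-compatible endpoint:

```ts
import OpenAI from 'openai';
import { Agent } from '@openai/agents';
import { OpenAIChatCompletionsModel } from '@openai/agents-openai';

// Point the official OpenAI client at any Chat Completions compatible
// endpoint; here, a local Ollama instance.
const client = new OpenAI({
  baseURL: 'http://localhost:11434/v1', // Ollama's OpenAI-compatible API
  apiKey: 'ollama', // Ollama ignores the key, but the client requires one
});

// Assumption: (client, modelName) constructor args — verify against the
// OpenAIChatCompletionsModel docs linked above.
const agent = new Agent({
  name: 'Renamer',
  model: new OpenAIChatCompletionsModel(client, 'llama3.1'),
});
```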
It looks like we can find adapters for the Vercel AI SDK in `@openai/agents-extensions`:
- https://openai.github.io/openai-agents-js/openai/agents-extensions/classes/aisdkmodel/
  > AiSdkModel
  >
  > Wraps a model from the AI SDK that adheres to the LanguageModelV2 spec to be used as a model in the OpenAI Agents SDK.
  >
  > While you can use this with the OpenAI models, it is recommended to use the default OpenAI model provider instead.
  >
  > If tracing is enabled, the model will send generation spans to your traces processor.
- https://ai-sdk.dev/docs/foundations/providers-and-models#ai-sdk-providers
  > AI SDK Providers
  >
  > The AI SDK comes with a wide range of providers that you can use to interact with different language models:
- https://ai-sdk.dev/providers/community-providers/custom-providers
  > Writing a Custom Provider
  >
  > The AI SDK provides a Language Model Specification that enables you to create custom providers compatible with the AI SDK. This specification ensures consistency across different providers.
  - https://github.com/vercel/ai/tree/main/packages/provider/src/language-model/v2
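Wiring that together: the `@openai/agents-extensions` docs show an `aisdk()` helper that wraps an AI SDK model for use as an Agents SDK model, which would let humanify reuse the whole AI SDK provider ecosystem. Something like the following (the model choice is just an example):

```ts
import { Agent } from '@openai/agents';
import { aisdk } from '@openai/agents-extensions';
import { openai } from '@ai-sdk/openai';
// Any AI SDK provider should work here: Ollama, OpenRouter, etc.

// Wrap a Vercel AI SDK model so the Agents SDK can drive it.
const model = aisdk(openai('gpt-4o-mini'));

const agent = new Agent({
  name: 'Renamer',
  instructions: 'Suggest descriptive names for minified identifiers.',
  model,
});
```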
The Vercel AI SDK v5 supports the concept of a Global Provider:
- https://vercel.com/blog/ai-sdk-5#global-provider
  > Global Provider
  >
  > The AI SDK 5 includes a global provider feature that allows you to specify a model using just a plain model ID string:
  >
  > ```ts
  > import { streamText } from 'ai';
  >
  > const result = await streamText({
  >   model: 'openai/gpt-4o', // Uses the global provider (defaults to AI Gateway)
  >   prompt: 'Invent a new holiday and describe its traditions.',
  > });
  > ```
  >
  > By default, the global provider is set to the Vercel AI Gateway.
  >
  > Customizing the Global Provider
  >
  > You can set your own preferred global provider:
  >
  > ```ts
  > import { openai } from '@ai-sdk/openai';
  > import { streamText } from 'ai';
  >
  > // Initialise once during startup:
  > globalThis.AI_SDK_DEFAULT_PROVIDER = openai;
  >
  > // Somewhere else in your codebase:
  > const result = streamText({
  >   model: 'gpt-4o', // Uses OpenAI provider without prefix
  >   prompt: 'Invent a new holiday and describe its traditions.',
  > });
  > ```
  >
  > This simplifies provider usage and makes it easier to switch between providers without changing your model references throughout your codebase.
This defaults to using the Vercel AI Gateway, which you can read more about here:
- https://vercel.com/ai-gateway
  > The AI Gateway For Developers
  >
  > Robustly access hundreds of AI models through a centralized interface and ship with ease.
  >
  > **Can I try the Gateway for free?**
  >
  > Yes! When you sign up for a Vercel account, you get $5 of credits every 30 days to try out any model from our model list. We don’t restrict access to premium models.
  >
  > Note: After you make your first payment, you are considered a paid customer and will no longer receive the free credits.
  >
  > **How is the Gateway priced?**
  >
  > We offer tokens at list price from the upstream providers with no markup. If you bring your own key, we will not add any markup to your token price (0%). You are responsible for any payment processing fees.
  >
  > **Do you have any rate limits?**
  >
  > While the upstream providers may have limits, Vercel doesn’t place any rate limits on your queries. We are constantly working with the upstream providers to get you the maximum limits, throughput and reliability.
- https://vercel.com/docs/ai-gateway
  > AI Gateway
  >
  > The AI Gateway provides a unified API to access hundreds of models through a single endpoint. It gives you the ability to set budgets, monitor usage, load-balance requests, and manage fallbacks.
  >
  > The design allows it to work seamlessly with AI SDK 5, OpenAI SDK, or your preferred framework.
  - https://vercel.com/ai-gateway/models
    > Browse AI Gateway Models
    >
    > A catalog of models to help you build AI features for your Vercel project.
  - https://vercel.com/docs/ai-gateway/byok
    > Bring Your Own Key (BYOK)
    >
    > Using your own credentials with an external AI provider allows AI Gateway to authenticate requests on your behalf with no added markup. This approach is useful for utilizing credits provided by the AI provider or executing AI queries that access private cloud data. If a query using your credentials fails, AI Gateway will retry the query with its system credentials to improve service availability.
  - https://vercel.com/docs/ai-gateway/app-attribution
    > App Attribution
    >
    > App attribution allows Vercel to identify the application making a request through AI Gateway. When provided, your app can be featured on AI Gateway pages, driving awareness.
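Since the gateway is advertised as working with the OpenAI SDK too, humanify's existing OpenAI client could potentially be pointed at it directly. A rough sketch; the base URL and model slug here are assumptions to verify against the AI Gateway docs above:

```ts
import OpenAI from 'openai';

// Assumption: the gateway exposes an OpenAI-compatible endpoint at this URL;
// check the AI Gateway docs before relying on it.
const client = new OpenAI({
  apiKey: process.env.AI_GATEWAY_API_KEY,
  baseURL: 'https://ai-gateway.vercel.sh/v1',
});

const completion = await client.chat.completions.create({
  // Gateway models are addressed by 'creator/model-name' slugs from its catalog.
  model: 'anthropic/claude-sonnet-4',
  messages: [
    { role: 'user', content: 'Suggest a descriptive name for the variable `a`.' },
  ],
});

console.log(completion.choices[0].message.content);
```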