Programmatic access to the model capabilities and provider listings
Feature Description
Please allow the ability to:

- query the model capabilities, such as the maximum token limit, image input support, etc.
- list all the providers and their supported models, along with their default configuration values, etc.

through programmatic means.
Use Case
To build UIs that allow switching the models dynamically from the client side, we would need to query the model capabilities and the model listings.
For example, consider the below screenshot from the Playground:
Additional context
Could not find any API in the documentation to programmatically list the models or access their capabilities. Adding such an API would help with switching models dynamically from the client side.
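For illustration, a hypothetical shape for the requested surface could look like the sketch below; none of these names exist in the SDK today, they are only meant to show the kind of information a client-side model picker would need.

// Hypothetical sketch only -- the names below are made up and not part of the AI SDK.
interface ModelCapabilities {
  maxInputTokens?: number
  maxOutputTokens?: number
  supportsImageInput?: boolean
}

interface ProviderListing {
  provider: string
  // model id -> capabilities, including default configuration values if available
  models: Record<string, ModelCapabilities>
}

// A client-side model picker could then be driven by something like:
declare function listProviders(): ProviderListing[]

const modelIds = listProviders().flatMap((p) => Object.keys(p.models))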
+1 Not having provider.listModels() is a big missing feature for me.
Agreed! Big miss.
I looked into the @ai-sdk/openai package and it looks like there is an OpenAIChatModelId type that could be inferred from a readonly array (with as const).
Something like this: https://github.com/vidup/vercel-ai/blob/90b867f2963b43942d5a2320c2b5d4ef5683592e/packages/openai/src/openai-chat-settings.ts#L2 (or to test it directly: https://github.dev/vidup/vercel-ai/blob/90b867f2963b43942d5a2320c2b5d4ef5683592e/packages/openai/src/openai-chat-settings.ts#L2)
What do you think?
EDIT: The google package works in a similar way, so I guess they all do. I can make a PR if needed.
@vidup Do it. Although I think an API call for the latest & greatest models, like my code below, is better. I have a refetch button above my select/option to fetch the newest models.
I also agree on the .listModels() feature. It is too complicated right now. Only openrouter and gemini work with my code (although gemini isn't returning 2.5 models for some reason). Maybe lmstudio works too (I haven't tested).
import ky from 'ky'

import { APIProvider } from '@/shared/types'

type FetchModelsOptions = {
  providerName: APIProvider
  apiKey: string
}

// Fetch the list of model ids exposed by a provider's HTTP models endpoint.
export async function fetchModels({
  providerName,
  apiKey,
}: FetchModelsOptions) {
  let apiUrl = ''
  let headers: Record<string, string> = {}
  let parseModels = (result: any): string[] => []

  switch (providerName) {
    case 'openrouter':
      apiUrl = 'https://openrouter.ai/api/v1/models'
      headers = { Authorization: `Bearer ${apiKey}` }
      parseModels = (result) => result?.data?.map((m: any) => m.id) || []
      break
    case 'openai':
      apiUrl = 'https://api.openai.com/v1/models'
      headers = { Authorization: `Bearer ${apiKey}` }
      parseModels = (result) => result?.data?.map((m: any) => m.id) || []
      break
    case 'anthropic':
      apiUrl = 'https://api.anthropic.com/v1/models'
      headers = {
        'x-api-key': apiKey,
        // the Anthropic API also expects a version header on every request
        'anthropic-version': '2023-06-01',
        'content-type': 'application/json',
      }
      parseModels = (result) => result?.data?.map((m: any) => m.id) || []
      break
    case 'gemini':
      apiUrl = `https://generativelanguage.googleapis.com/v1/models?key=${apiKey}`
      headers = {}
      parseModels = (result) =>
        result?.models?.map((m: any) => m.name || m.id) || []
      break
    case 'lmstudio':
      apiUrl = 'http://localhost:1234/v1/models'
      headers = { Authorization: `Bearer ${apiKey}` }
      parseModels = (result) => result?.data?.map((m: any) => m.id) || []
      break
    default:
      console.warn('No API fetch implemented for provider', providerName)
      return []
  }

  try {
    const result = await ky.get(apiUrl, { headers }).json()
    return parseModels(result)
  } catch (err) {
    console.error(`Error fetching models for provider ${providerName}:`, err)
    return []
  }
}
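For reference, a minimal call site could look like this (how the API key is stored is up to the app; here it is just a placeholder variable):

// Example usage: refresh the model list for the currently selected provider.
const apiKey = 'sk-or-...' // placeholder, e.g. read from the user's settings
const models = await fetchModels({ providerName: 'openrouter', apiKey })
// `models` is a plain string[] of ids on success, or [] if the request or parsing failed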
@deadcoder0904 this is indeed a useful function to retrieve the latest models at a given point in time, but I would advise against using it directly in a production UI, unless you're certain the behavior is fully understood and controlled.
The main issue is that it tightly couples your software to the availability and stability of third-party APIs. If any of those APIs are down, misconfigured, or rate-limited, the software relying on fetchModels might break unexpectedly. This would likely be surprising to most users of the package, who expect more predictability and resilience.
Additionally, dynamically changing the list of available models at runtime could lead to unintended consequences. For example, if a newer, much more expensive model becomes available and is surfaced automatically in a UI, users might select it unknowingly. Given the very large differences in inference cost between models, that could incur unexpected expenses.
There's also the matter of how models are organized in existing packages. For example, in this package you can find categories of models like chat, image, audio, etc. A raw list from an API won't reflect this structure, which could lead to invalid behavior such as trying to run a text-generation function on an image-only model. I think it should at least mirror the package structure.
That said, your function can absolutely be useful internally as a tool to periodically update the model list exposed by each package. I guess it could be automated (e.g., via CI/CD) but should include human review to avoid accidents.
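For illustration, mirroring the package structure could mean keeping the modality categories explicit instead of exposing one flat list; a rough sketch (the ids shown are only examples):

type Modality = 'chat' | 'image' | 'audio'

type ProviderModels = Record<Modality, string[]>

// A provider package could expose its known ids grouped by modality.
const openaiModels: ProviderModels = {
  chat: ['gpt-5', 'gpt-5-mini'],
  image: ['dall-e-3'],
  audio: ['whisper-1'],
}

// A UI would then only offer chat models where a text-generation function is used.
const chatModelIds = openaiModels.chat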
Something like this: https://github.com/vidup/vercel-ai/blob/90b867f2963b43942d5a2320c2b5d4ef5683592e/packages/openai/src/openai-chat-settings.ts#L2 (or to test it directly: https://github.dev/vidup/vercel-ai/blob/90b867f2963b43942d5a2320c2b5d4ef5683592e/packages/openai/src/openai-chat-settings.ts#L2)
Thanks @vidup for sharing this 👍
Also stumbled on this, and it seems like something like the following could work?
export const OPENAI_CHAT_MODEL_IDS = [
  // list trimmed for the example
  'gpt-5',
  'gpt-5-2025-08-07',
  'gpt-5-mini',
  'gpt-5-mini-2025-08-07',
  'gpt-5-nano',
  'gpt-5-nano-2025-08-07',
  'gpt-5-chat-latest'
] as const;

export type OpenAIChatModelId =
  | (typeof OPENAI_CHAT_MODEL_IDS)[number]
  | (string & {});
The same would also apply to other providers.
Briefly looking at the repo, such a change would likely be fine and not add maintenance burden, since adding or removing a model would still be a matter of editing a list, like before.
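If something along these lines lands, a client could drive a model picker directly from the exported array while still allowing custom ids; for example (assuming the array would be re-exported from the package root):

import { OPENAI_CHAT_MODEL_IDS, type OpenAIChatModelId } from '@ai-sdk/openai';

// Populate a <select> from the known ids.
const options = OPENAI_CHAT_MODEL_IDS.map((id) => ({ value: id, label: id }));

// Thanks to the `(string & {})` branch, arbitrary ids still type-check,
// so fine-tuned or newly released models are not blocked by the type.
const custom: OpenAIChatModelId = 'my-fine-tuned-model';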
@vidup just checking if you are still planning to open a PR to explore this approach?
Hey @jtpio :)
I can open a first PR this week. Before I do, a couple of scope/API decisions to work out:
- What to export first. The arrays seem to be exported and used in the rest of their respective packages already, so I believe we should make them the root of any further details we add afterwards.
// openai package
export const OPENAI_CHAT_MODEL_IDS = ['gpt-5', /* ... */] as const;
export type OpenAIChatModelId = (typeof OPENAI_CHAT_MODEL_IDS)[number];
// google package
export const GOOGLE_CHAT_MODEL_IDS = ['gemini-2.5-pro', /* ... */] as const;
export type GoogleChatModelId = (typeof GOOGLE_CHAT_MODEL_IDS)[number];
There are a lot of providers, so I'll probably open the PR with these two first. I noticed there are other files with different model types (image, etc.) that would require the same treatment.
- As for capabilities, I’d propose a non-breaking follow-up in the PR (rather than implementing it right now, considering the maintenance burden it probably represents) that keeps IDs as the stable public API and adds an optional map with conservative fields:
export interface ModelInfo {
  modality: 'chat' | 'image' | 'audio';
  maxInputTokens?: number;
  maxOutputTokens?: number;
  supports?: {
    tools?: boolean;
    images?: boolean;
    streaming?: boolean;
    thinking?: 'none' | 'optional' | 'required';
  };
  pricingPerMTokUSD?: { input: number; output: number; thinking?: number };
}

export const openaiModelInfo: Partial<Record<OpenAIChatModelId, ModelInfo>> = {
  'gpt-5': {
    modality: 'chat',
    supports: { tools: true, streaming: true, thinking: 'optional' },
  },
};
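To make the intent concrete, a UI could consult such a map to gate features and fall back to safe defaults for unlisted models (sketch only, building on the example above):

// Gate image upload in the UI based on the optional capability map.
function supportsImages(modelId: OpenAIChatModelId): boolean {
  return openaiModelInfo[modelId]?.supports?.images ?? false;
}

// Unlisted or unknown models simply fall back to `false`.
const canAttachImages = supportsImages('gpt-5'); // false here, since `images` is not set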
What do you think? In the end it's in the authors' hands anyway :)