generateObject for openAICompatible models
Feature Description
In the documentation it is pointed out that only generateText (and streaming) and embeddings are supported when using openAICompatible models (like local models running in LM Studio): https://sdk.vercel.ai/providers/openai-compatible-providers
LM Studio provides the possibility to restrict the generated output to a strict schema. When using the JS SDK from LM Studio, zod can be used to define the schema to adhere to: https://lmstudio.ai/docs/typescript/llm-prediction/structured-response
Is this something that is on the development roadmap for the openAICompatible provider? Or is it better to write my own custom provider for LM Studio if I want to use structured output from LM Studio models?
Use Cases
Running a local model with LM Studio while still being able to generate (strictly) structured output using the generateObject function.
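For illustration, a minimal sketch of what I would like to be able to do (assuming an LM Studio server on http://localhost:1234/v1; the model id and schema are just placeholders):

import { createOpenAICompatible } from '@ai-sdk/openai-compatible';
import { generateObject } from 'ai';
import { z } from 'zod';

// LM Studio exposes an OpenAI-compatible endpoint on localhost by default
const lmstudio = createOpenAICompatible({
  name: 'lmstudio',
  baseURL: 'http://localhost:1234/v1',
});

// the goal: have the zod schema enforced via LM Studio's structured output feature
const { object } = await generateObject({
  model: lmstudio('mistral-nemo-instruct-2407'),
  schema: z.object({
    from: z.object({ name: z.string(), email: z.string() }),
    subject: z.string(),
  }),
  prompt: 'Parse this email: ...',
});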
Additional context
No response
Have you tried it? It might just work.
Yes, I have. I get errors like: "No object generated: the tool was not called."
When I look at the logs in LM Studio, I see that responses are being generated for the generateObject calls that are mapped to openAICompatible requests. But they don't adhere strictly to the schema:
...
"message": {
  "role": "assistant",
  "content": "Here is the parsed email:\n\n**From:**\n- Name: Julio and"
...
So I'm guessing that the available structured response features are not being used and the zod schema is being ignored.
Can you point me to the source code where a generateObject call is mapped to the openAICompatible request?
Have you tried setting the mode to json?
https://sdk.vercel.ai/docs/ai-sdk-core/generating-structured-data#generation-mode
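For reference, a minimal sketch of that suggestion (model, schema, and prompt stand in for your own values):

const { object } = await generateObject({
  model,
  schema,
  prompt,
  mode: 'json', // ask the SDK to use JSON mode instead of tool calling
});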
I hadn't.
I now tried setting the mode explicitly to json, avoiding leaving it to the provider to infer the mode.
This results in a Bad Request (status code 400) with the error message: 'response_format.type' must be 'json_schema'
AI_APICallError: Bad Request
  cause: undefined,
  url: "http://localhost:1234/v1/chat/completions",
  requestBodyValues: {
    model: "mistral-nemo-instruct-2407",
    user: undefined,
    max_tokens: undefined,
    temperature: 0,
    top_p: undefined,
    frequency_penalty: undefined,
    presence_penalty: undefined,
    response_format: [Object ...],
    stop: undefined,
    seed: undefined,
    messages: [
      [Object ...], [Object ...], [Object ...]
    ],
  },
  statusCode: 400,
  responseHeaders: {
    connection: "keep-alive",
    "content-length": "56",
    "content-type": "application/json; charset=utf-8",
    date: "Thu, 13 Mar 2025 18:26:08 GMT",
    etag: "W/\"38-VTlHjOuX6WGeUOPvg9RslR1AhrE\"",
    "keep-alive": "timeout=5",
    "x-powered-by": "Express",
  },
  responseBody: "{\"error\":\"'response_format.type' must be 'json_schema'\"}",
  isRetryable: false,
  data: undefined,
  vercel.ai.error: true,
  vercel.ai.error.AI_APICallError: true,
After further investigation: the response_format.type that is being set by the openAICompatible adapter is json_object, while LM Studio requires it to be json_schema?
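To illustrate the difference (roughly, following the OpenAI structured outputs request format; the schema content is just an example):

// what the adapter sends by default, without the schema:
const jsonObjectFormat = { type: 'json_object' };

// what LM Studio expects for strictly structured output:
const jsonSchemaFormat = {
  type: 'json_schema',
  json_schema: {
    name: 'response',
    strict: true,
    schema: { type: 'object', properties: { subject: { type: 'string' } } },
  },
};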
I can see that the chat model needs to be configured with supportsStructuredOutputs set to true (https://github.com/vercel/ai/blob/e462d108de507e45f2fe08eb7ba5686166d44dbc/packages/openai-compatible/src/openai-compatible-chat-language-model.ts#L194)
This config can be set when creating the OpenAICompatibleChatLanguageModel using its constructor as the 3rd param (https://github.com/vercel/ai/blob/e462d108de507e45f2fe08eb7ba5686166d44dbc/packages/openai-compatible/src/openai-compatible-chat-language-model.ts#L87)
But there doesn't seem to be a way to override this setting when creating the provider: https://github.com/vercel/ai/blob/e462d108de507e45f2fe08eb7ba5686166d44dbc/packages/openai-compatible/src/openai-compatible-provider.ts#L132
So the reason is that LM Studio's API is not fully OpenAI-compatible when it comes to this detail?
Two solutions: a) file a bug report with LM Studio, or b) we can write a custom LM Studio adapter that gets this right (seems off though, if they claim OpenAI compatibility).
Sorry for my late response.
If I understand correctly, after creating the openAICompatible provider, you can instantiate a certain model using the model identifier. Since every model has its own capabilities, it makes sense to allow for configuration of the model specifics. One of these specifics is whether the model supports structured object output using a schema.
Currently, the openAICompatible provider does not allow for providing these specifics.
Since the default for the supportsStructuredOutputs parameter is false and there is no way to override this default config, the openAICompatible provider will never add the given schema to the requests sent to the LM Studio server. So it seems to me that LM Studio is fully compatible with OpenAI, but there is no way to configure a given model to allow for structured output.
The PR to allow for this feature seems fairly easy (see the sketch after this list):
- when creating a model from the provider, allow not just the modelId and the settings, but also a model configuration
- merge this model configuration with the default configuration when creating the underlying chatLanguageModel, completionModel and embeddingModel models.
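Something along these lines (a hypothetical API sketch, not the actual implementation; the second argument carrying supportsStructuredOutputs and defaultObjectGenerationMode is the part that would be new):

import { createOpenAICompatible } from '@ai-sdk/openai-compatible';

const lmstudio = createOpenAICompatible({
  name: 'lmstudio',
  baseURL: 'http://localhost:1234/v1',
});

// hypothetical: per-model configuration merged into the default config
const model = lmstudio('mistral-nemo-instruct-2407', {
  supportsStructuredOutputs: true,
  defaultObjectGenerationMode: 'json',
});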
Shall I start this PR? Do you have any guidelines available for contributing?
@lgrammel I can confirm that when setting supportsStructuredOutputs to true, it works as expected. The openAICompatible adapter (when mode is also set to json) then sends the schema to adhere to, and the LLM in LM Studio returns JSON output as expected.
I will now create the PR to make it possible to override the config for the model when creating the chat/language model.
Btw, I was able to work around the error "the tool was not called" by explicitly providing the following system message when prompting LM Studio:
const result = await generateObject<T>({
  model: mistral24b,
  prompt: opts.prompt,
  schema: opts.schema,
  system: 'Please generate only the JSON output. DO NOT provide any preamble.',
});
Is there a workaround for this? I've been trying to get this to work. I've already tried manually patching supportsStructuredOutputs: true into the compiled JS, and that got me further, but then I hit this error:
Invalid tool_choice type: 'object'. Supported string values: none, auto, required
Is there a workaround for this? I've been trying to get this to work.
You can manually create the OpenAICompatibleChatLanguageModel until the supportsStructuredOutputs parameter is added:
import { OpenAICompatibleChatLanguageModel } from '@ai-sdk/openai-compatible';
import type { LanguageModelV1 } from '@ai-sdk/provider';

const model: LanguageModelV1 = new OpenAICompatibleChatLanguageModel(modelName, {}, {
  provider: `lmstudio.chat`,
  url: ({ path }) => {
    const url = new URL(`http://192.168.1.200:5001/v1${path}`);
    return url.toString();
  },
  headers: () => ({}),
  defaultObjectGenerationMode: 'json',
  supportsStructuredOutputs: true,
});
(this is used as the model param in streamObject)
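For completeness, a sketch of how that manually created model can then be used with streamObject (the schema and prompt here are just examples):

import { streamObject } from 'ai';
import { z } from 'zod';

const { partialObjectStream } = streamObject({
  model, // the OpenAICompatibleChatLanguageModel created above
  schema: z.object({ title: z.string(), summary: z.string() }),
  prompt: 'Summarize the following article: ...',
  mode: 'json',
});

for await (const partial of partialObjectStream) {
  console.log(partial);
}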
This is working for me using the current AI SDK v5 (alpha 10):
const model = new OpenAICompatibleChatLanguageModel(
  'mistral-small-3.1-24b-instruct-2503',
  { // <-- second parameter, not third
    provider: `lmstudio.chat`,
    url: ({ path }) => {
      const url = new URL(`http://localhost:1234/v1${path}`)
      return url.toString()
    },
    headers: () => ({}),
    supportsStructuredOutputs: true,
    // 'defaultObjectGenerationMode' doesn't seem to be a valid option anymore
  },
)
const { object, response } = await generateObject({
  model,
  system: prompt,
  maxRetries: 1,
  messages: [{ role: 'user', content: 'message content' }],
  schema: yourSchema, // <-- zod/v4 works!
  mode: 'json', // <-- doesn't work without this
})