[JS] openAICompatible plugin: `NotFoundError: 404 error while getting model`; I have to declare the model ID twice in the `generate()` call

**Describe the bug**
I'm working with Docker Model Runner (OpenAI-API compliant), and I have to declare the model id a second time in the `config` param to get a completion.
**To Reproduce**

```ts
import { genkit, modelRef, z } from 'genkit';
import { openAICompatible } from '@genkit-ai/compat-oai';

const ai = genkit({
  plugins: [
    openAICompatible({
      name: 'openai',
      apiKey: '',
      baseURL: 'http://localhost:12434/engines/v1/',
    }),
  ],
});

const myLocalModel = modelRef({
  name: 'openai/ai/qwen2.5:0.5B-F16',
});

const llmResponse = await ai.generate({
  model: myLocalModel,
  prompt: 'Who is Jean-Luc Picard?',
  config: {
    temperature: 0.9,
    model: 'ai/qwen2.5:0.5B-F16',
  },
});

console.log(llmResponse.text);
```
**Expected behavior**
In theory it should be:

```ts
const llmResponse = await ai.generate({
  model: myLocalModel,
  prompt: 'Who is Jean-Luc Picard?',
  config: {
    temperature: 0.9,
  },
});
```
But if I do not add the model id to the `config` object, I get this error:

```text
node:internal/modules/run_main:104
    triggerUncaughtException(
    ^

NotFoundError: 404 error while getting model: get model '"qwen2.5:0.5B-F16"': model not found
    at APIError.generate (file:///Users/k33g/Library/CloudStorage/Dropbox/genkit/getting-started-with-genkit/1st-steps-with-genkit-js/node_modules/@genkit-ai/compat-oai/node_modules/openai/error.mjs:50:20)
    at OpenAI.makeStatusError (file:///Users/k33g/Library/CloudStorage/Dropbox/genkit/getting-started-with-genkit/1st-steps-with-genkit-js/node_modules/@genkit-ai/compat-oai/node_modules/openai/core.mjs:295:25)
    at OpenAI.makeRequest (file:///Users/k33g/Library/CloudStorage/Dropbox/genkit/getting-started-with-genkit/1st-steps-with-genkit-js/node_modules/@genkit-ai/compat-oai/node_modules/openai/core.mjs:339:30)
    at process.processTicksAndRejections (node:internal/process/task_queues:105:5)
    at async /Users/k33g/Library/CloudStorage/Dropbox/genkit/getting-started-with-genkit/1st-steps-with-genkit-js/node_modules/@genkit-ai/compat-oai/lib/model.js:310:18
    at async /Users/k33g/Library/CloudStorage/Dropbox/genkit/getting-started-with-genkit/1st-steps-with-genkit-js/node_modules/@genkit-ai/core/lib/action.js:142:27
    at async /Users/k33g/Library/CloudStorage/Dropbox/genkit/getting-started-with-genkit/1st-steps-with-genkit-js/node_modules/@genkit-ai/core/lib/tracing/instrumentation.js:75:24
    at async runInNewSpan (/Users/k33g/Library/CloudStorage/Dropbox/genkit/getting-started-with-genkit/1st-steps-with-genkit-js/node_modules/@genkit-ai/core/lib/tracing/instrumentation.js:60:10)
    at async actionFn.run (/Users/k33g/Library/CloudStorage/Dropbox/genkit/getting-started-with-genkit/1st-steps-with-genkit-js/node_modules/@genkit-ai/core/lib/action.js:101:18)
    at async dispatch (/Users/k33g/Library/CloudStorage/Dropbox/genkit/getting-started-with-genkit/1st-steps-with-genkit-js/node_modules/@genkit-ai/core/lib/action.js:52:24) {
  status: 404,
  headers: {
    'content-length': '75',
    'content-type': 'text/plain; charset=utf-8',
    date: 'Sun, 09 Nov 2025 03:29:37 GMT',
    'x-content-type-options': 'nosniff'
  },
  request_id: undefined,
  error: undefined,
  code: undefined,
  param: undefined,
  type: undefined,
  traceId: 'e877c66b9ac2aaba996dd3d76be708d0',
  ignoreFailedSpan: true
}
```
If I add it, it works.
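Reading the error message, the server seems to receive `qwen2.5:0.5B-F16` (the `ai/` prefix is gone, and the name arrives wrapped in extra quotes), while the workaround passes `ai/qwen2.5:0.5B-F16` via `config.model`. A minimal sketch of the name resolution I would expect, assuming only the plugin prefix should be stripped from the Genkit model ref (`apiModelId` is a hypothetical helper for illustration, not part of the plugin's API):

```typescript
// Hypothetical helper: strip only the plugin prefix ("openai/") from the
// Genkit model ref and send the remainder as the OpenAI-compatible model id.
// The "ai/" namespace belongs to Docker Model Runner's model id and must stay.
function apiModelId(ref: string, pluginName: string): string {
  const prefix = `${pluginName}/`;
  return ref.startsWith(prefix) ? ref.slice(prefix.length) : ref;
}

console.log(apiModelId('openai/ai/qwen2.5:0.5B-F16', 'openai'));
// → ai/qwen2.5:0.5B-F16
```

The error suggests the plugin instead keeps only the last path segment, dropping the `ai/` namespace that Docker Model Runner expects.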
**Runtime**
- OS: macOS Tahoe 26.0.1

**Node version**
- v23.10.0

**GenkitJS version**
- v1.22.0
- v1.23.0
**Additional context**
With Genkit Go and Docker Model Runner there is no issue:

```go
func main() {
	ctx := context.Background()

	g := genkit.Init(ctx, genkit.WithPlugins(&openai.OpenAI{
		APIKey: "tada",
		Opts: []option.RequestOption{
			option.WithBaseURL("http://localhost:12434/engines/v1/"),
		},
	}))

	resp, err := genkit.Generate(ctx, g,
		ai.WithModelName("openai/ai/qwen2.5:0.5B-F16"),
		ai.WithMessages(
			ai.NewSystemTextMessage("You are the dungeon master of a D&D game."),
			ai.NewUserTextMessage("Generate a D&D NPC name."),
		),
		ai.WithConfig(map[string]any{"temperature": 0.7}),
	)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(resp.Text())
}
```
The issue still exists with 1.23.0.

The issue still exists with 1.24.0.
Why do I have to specify the model twice, like this:

```ts
const response = await ai.generate({
  model: "openai/ai/qwen2.5:0.5B-F16",
  prompt: 'Who is Jean-Luc Picard?',
  config: {
    temperature: 0.9,
    model: 'ai/qwen2.5:0.5B-F16',
  },
});
```
Instead of this:
```ts
const response = await ai.generate({
  model: "openai/ai/qwen2.5:0.5B-F16",
  prompt: 'Who is Jean-Luc Picard?',
  config: {
    temperature: 0.9,
  },
});
```