Run qwen3 coder locally
2025-08-31 Sunday UTC
I finally got moatless-tools to run with a locally hosted model, in this case Qwen3-Coder 30B via LM Studio.
Ollama was not suitable. I used swebench_react for the --flow, and I had to change code in three places to make this work.
I don't expect this to be merged as-is, so I'm happy to take feedback on what it needs before it's suitable for merging.
I only managed to complete the Verify Setup step in the README; I haven't run a full evaluation yet. It took me a long time just to get this far, so I'll return later to make run_evaluation.py work with Qwen3-Coder on LM Studio.
Sorry, I forgot to reply to your discussion post. But good job getting it to work despite the non-existent documentation 😅
The model config is read from the file models.json in the directory pointed to by the MOATLESS_DIR environment variable, so it might work to put the following in that file:
{
  "model_id": "qwen-3-coder",
  "model": "openai/qwen3-coder-30b",
  "temperature": 0.0,
  "max_tokens": 8000,
  "timeout": 120.0,
  "model_base_url": "http://host.docker.internal:1234/v1",
  "model_api_key": "faux_key",
  "metadata": null,
  "message_cache": true,
  "thoughts_in_action": false,
  "disable_thoughts": false,
  "few_shot_examples": true,
  "headers": {},
  "params": {},
  "merge_same_role_messages": false,
  "reasoning_effort": null,
  "completion_model_class": "moatless.completion.tool_call.ToolCallCompletionModel"
}
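To sanity-check the lookup described above (moatless reads model configs from models.json under MOATLESS_DIR), here is a minimal sketch of that mechanic in plain Python. This is illustrative only: the actual schema moatless expects may differ (for example, models.json might hold a list of config objects rather than a single one), so treat the helper names here as hypothetical.

```python
import json
import os
import tempfile

# The config object from the post above (trimmed to the key fields).
MODEL_CONFIG = {
    "model_id": "qwen-3-coder",
    "model": "openai/qwen3-coder-30b",
    "temperature": 0.0,
    "max_tokens": 8000,
    "model_base_url": "http://host.docker.internal:1234/v1",
    "model_api_key": "faux_key",
    "completion_model_class": "moatless.completion.tool_call.ToolCallCompletionModel",
}


def write_model_config(config: dict) -> str:
    """Write config to $MOATLESS_DIR/models.json and return the path.

    Hypothetical helper; moatless itself does its own loading.
    """
    path = os.path.join(os.environ["MOATLESS_DIR"], "models.json")
    with open(path, "w") as f:
        json.dump(config, f, indent=2)
    return path


def read_model_config() -> dict:
    """Read the config back from $MOATLESS_DIR/models.json."""
    with open(os.path.join(os.environ["MOATLESS_DIR"], "models.json")) as f:
        return json.load(f)


# Demo: a temporary directory stands in for the real MOATLESS_DIR.
os.environ["MOATLESS_DIR"] = tempfile.mkdtemp()
write_model_config(MODEL_CONFIG)
loaded = read_model_config()
```

The round-trip confirms the file lands where the env var points, which is the part that is easy to get wrong when running inside Docker.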
And then run:
uv run python scripts/docker_run.py --flow swebench_tools --model-id qwen-3-coder --instance-id django__django-11099 --evaluation-name testing_setup
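For orientation, here is a sketch of the OpenAI-compatible request that a config like the one above implies. This assumes the "openai/" prefix in the model name is LiteLLM-style provider routing (so LM Studio's server sees only the bare model name) and builds the request without sending it; moatless goes through its own completion layer, so this is not its actual code path.

```python
def build_chat_request(config: dict, prompt: str) -> tuple[str, dict, dict]:
    """Build the URL, headers, and payload for an OpenAI-compatible
    chat completion request implied by the model config.

    Illustrative only; the real client is moatless's completion model class.
    """
    # LM Studio exposes /v1/chat/completions under the base URL.
    url = config["model_base_url"].rstrip("/") + "/chat/completions"
    headers = {
        # LM Studio ignores the key, but the OpenAI client shape requires one.
        "Authorization": f"Bearer {config['model_api_key']}",
        "Content-Type": "application/json",
    }
    payload = {
        # Assumption: "openai/" selects the provider; the server gets the
        # bare model name.
        "model": config["model"].removeprefix("openai/"),
        "temperature": config["temperature"],
        "max_tokens": config["max_tokens"],
        "messages": [{"role": "user", "content": prompt}],
    }
    return url, headers, payload


url, headers, payload = build_chat_request(
    {
        "model": "openai/qwen3-coder-30b",
        "model_base_url": "http://host.docker.internal:1234/v1",
        "model_api_key": "faux_key",
        "temperature": 0.0,
        "max_tokens": 8000,
    },
    "Say hello.",
)
```

Note the host.docker.internal hostname: it only resolves from inside a Docker container, which is why this base URL works for docker_run.py but would need to be localhost when testing from the host machine.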