avesed
> The build wiring still could use a little work. What version of the Vulkan SDK do you have installed? Can you share the output of your `cmake -B ...`...
> Your Vulkan version looks good. What DLLs are present in `C:\ai\ollama-0.12.6\build\lib\ollama` after you build?

I rebuilt it using 0.12.10. The folder contained:

```
ggml-vulkan.dll
ggml-cpu-x64.dll
ggml-cpu-sse42.dll
ggml-cpu-skylakex.dll
ggml-cpu-sandybridge.dll
ggml-cpu-icelake.dll
ggml-cpu-haswell.dll
...
```
> Vulkan is now built in for 0.12.11 and requires setting a variable to enable: `OLLAMA_VULKAN=1`. Please give it a try.

No luck. It was the same behavior. Ollama can...
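For reference, this is roughly how I enabled the flag when testing (a sketch assuming a POSIX shell; in Windows PowerShell it would be `$env:OLLAMA_VULKAN="1"` before launching the server):

```shell
# Enable the experimental Vulkan backend for this shell session,
# then start the server so it picks the variable up.
export OLLAMA_VULKAN=1
echo "$OLLAMA_VULKAN"
# ollama serve   # commented out: requires a local Ollama install
```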
So I did a bit of debugging myself, and it seems like the *.dll files are not loading. I can confirm that VC++ and the Vulkan SDK are installed properly, and dumpbin...
> [@avesed](https://github.com/avesed) are you by any chance running Windows Enterprise 25H2? If you build from source from main and set OLLAMA_DEBUG="2" do you see the same `The specified procedure could...
> Thanks for confirming. We'll try to repro and figure out what's causing this. Ok, thank you.
@dhiltgen Thank you for all the help. I fixed it. So, this system had an Intel Arc B580 and was running an Intel version of Ollama, and it turns out it...
Adding `"store": false` fixes the first issue, but it is still hitting the context limit. I have to manually run `/compact` to continue.
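For anyone else hitting the first error, this is roughly where the flag goes in the opencode config (a sketch assuming the `openai` provider and `gpt-5.2-codex` model IDs from my setup; adjust the names to match yours):

```json
{
  "$schema": "https://opencode.ai/config.json",
  "provider": {
    "openai": {
      "models": {
        "gpt-5.2-codex": {
          "options": {
            "store": false
          }
        }
      }
    }
  }
}
```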
> [@avesed](https://github.com/avesed) can you run `opencode export > session.json` (a command to run in your terminal that will output the session.json file)? Can you send me the file...
@280601922 My config looks like this. I added the limit because I sometimes trigger the second error reported in this issue; after the change I no longer see the error, but tokens are consumed very quickly (especially if you are using a reverse proxy; logging in directly is a bit better), so use it with care.

```json
{
  "$schema": "https://opencode.ai/config.json",
  "provider": {
    "openai": {
      "options": {
        "baseURL": "",
        "apiKey": ""
      },
      "models": {
        "gpt-5.2-codex": {
          "options": {
            "store": false,
            "reasoningEffort": "xhigh",
            "textVerbosity": "medium",
            "reasoningSummary": ...
```