won't write to txt file, no error
Garuda Linux, llama.cpp OpenAI endpoint, goose 1.12.0, qwen3 30b coder
me again.
<tool_call> <function=developer__text_editor
⏱ Elapsed time: 29m 31s Context: ●○○○○○○○○○ 5% (12182/262144 tokens) ( O)>
No errors, no idea what it was doing, but it wasn't writing to a text file like I asked.
Sorry for the low effort, but I'm like microns from just giving up on goose entirely. :(
Literally I've done nothing but fight with it. No useful output at all even once.
It can read files and talk, but not write files, I guess? Maybe resume files? I don't know.
Is it a case of a slower or older machine? You might need to wait a lot longer. Unfortunately these LLM tools generally need an M2-or-greater Mac or an equivalent PC.
Also, which version are you on? Is it a newer version of goose?
goose 1.12.0 (sorry i wasn't clear before)
And maybe? My machine is decent-ish. But if it's a timeout issue of some kind, can I feature-request an explicit error message? Is there a console or error log somewhere? Like I said, I'm not seeing explicit errors. CPU: Intel(R) Core(TM) Ultra 9 185H (22) GPU: Intel Arc Graphics Memory: 9333MiB / 31595MiB
can you generate a diagnostics report
A what?
It's the little bug icon near the chat input
I'm sorry, I must have screwed up somewhere. I'm using the CLI thing, like goose in my Arch Linux (Garuda) terminal. Is there a CLI equivalent?
You can check the logs in these locations: https://block.github.io/goose/docs/guides/logs
yeah, you can do
session diagnostics --session-id=20251022_10
I'm sorry, the session doesn't help all that much. It says "can you see the file in this folder?" as instructions, and from the requests it looks like it was busy looking at various files but never really finished. Does that line up with what you were trying to do?
Yeah, I think in this instance I basically just made a text file in a folder, launched goose from there, and asked if it could read it and write stuff to it etc. I've kinda forgotten. I haven't attempted to use goose since.
right. I'm happy to look further into this, but without the details I am not sure what I can do.
Ok, new round. Made a folder, did some tests; it folded at a view step. No error message, no crash.
Documents/-=Original=-/goosetext ❯ goose starting session | provider: ollama model: qwen3-coder session id: 20251104_1 working directory: /home/innomen/Media/Documents/-=Original=-/goosetext
goose is running! Enter your instructions, or try asking what goose can do.
Context: ○○○○○○○○○○ 0% (0/262144 tokens) ( O)> Can you create a text file in this directory that says "hello world" in it? I'll create a text file with "hello world" in it using the text editor tool. <tool_call> <function=developer__text_editor ─── text_editor | developer ────────────────────────── path: ~/Media/Documents/-=Original=-/goosetext/hello.txt command: write file_text: hello world
I've successfully created a text file named hello.txt in the current directory with the content "hello world".
You can verify this by viewing the file: <tool_call> <function=developer__text_editor ─── text_editor | developer ────────────────────────── path: ~/Media/Documents/-=Original=-/goosetext/hello.txt command: view
Yes, I've successfully created a text file named hello.txt in the current directory with the content "hello world".
The file is now available at /home/innomen/Media/Documents/-=Original=-/goosetext/hello.txt and contains exactly what you requested: "hello world".
⏱ Elapsed time: 4m 50s Context: ○○○○○○○○○○ 2% (5362/262144 tokens) ( O)> i've added a text file to the directory, i'd like you to write your summary and thoughts on the content of that file to a new file in the same directory. I'll create a new file that summarizes and provides thoughts on the content of the hello.txt file.
First, let me check what's in the hello.txt file to understand its content: <tool_call> <function=developer__text_editor ─── text_editor | developer ────────────────────────── path: ~/Media/Documents/-=Original=-/goosetext/hello.txt command: view
Now I'll create a new file with my summary and thoughts on the hello.txt file content. <tool_call> <function=developer__text_editor ─── text_editor | developer ────────────────────────── path: ~/Media/Documents/-=Original=-/goosetext/summary.txt command: write file_text: Summary and Thoughts on hello.txt
Content Analysis:
- The file contains the text "hello world"
- This is a classic example often used in programming tutorials
- It's a simple...
I've created a new file called summary.txt in the same directory with my analysis and thoughts about the hello.txt file.
The summary file contains:
- An analysis of the content ("hello world")
- Thoughts on why this particular text is significant
- Reflections on its role in programming education
- A brief conclusion about its meaning
You can view the summary by running: <tool_call> <function=developer__text_editor ─── text_editor | developer ────────────────────────── path: ~/Media/Documents/-=Original=-/goosetext/summary.txt command: view
I've successfully created a summary.txt file in the same directory with my analysis of the hello.txt file. The summary includes:
- A content analysis of "hello world"
- Thoughts on why this text is significant in programming education
- Reflections on its role as a traditional first program
- A conclusion about its meaning
The summary file is now available at /home/innomen/Media/Documents/-=Original=-/goosetext/summary.txt and contains my thoughts on the simple but meaningful "hello world" text file.
⏱ Elapsed time: 2m 57s Context: ○○○○○○○○○○ 2% (6181/262144 tokens) ( O)> i meant the new file i added to the directory, not the hello world file You're absolutely right, I apologize for the misunderstanding. Let me check what new file you added to the directory and then create a summary of that file's content.
First, let me see what files are currently in the directory: <tool_call> <function=developer__shell ─── shell | developer ────────────────────────── command: ls -la /home/innomen/Media/Documents/-=Original=-/goosetext/
I see there's a sourcery.txt file that was added more recently (12:46) than the summary.txt file (12:48). Let me check the contents of the sourcery.txt file to understand what you added:
<tool_call>
<function=developer__text_editor
─── text_editor | developer ──────────────────────────
path: ~/Media/Documents/-=Original=-/goosetext/sourcery.txt
command: view
⏱ Elapsed time: 11m 35s Context: ○○○○○○○○○○ 3% (6932/262144 tokens) ( O)> exit Closing session. Session ID: 20251104_1
Documents/-=Original=-/goosetext took 40m26s ❯ goose session diagnostics --session-id=20251104_1 Generating diagnostics bundle for session '20251104_1'... Diagnostics bundle saved to: diagnostics_20251104_1.zip
Documents/-=Original=-/goosetext ❯
thanks. frankly there isn't a lot to see here in the logs. the model just seems to not reply to the last tool call and calls it quits. I have this PR: https://github.com/block/goose/pull/5609 that adds some more logging etc. to ollama
at the same time, though, I just haven't seen qwen do a great job at tool calling. qwen-code is pretty big, but it's still more about writing code, maybe?
Well, the irony there is that what I want is a writing tool. Is plaintext/markdown work really more complicated than code?
I fear I’ve wasted your time. GPT is saying it’s an internal model issue. Not a Goose, llama.cpp, or SillyTavern problem.
What’s happening is that the model (Qwen3-30B-Coder) successfully processes the context and issues a <tool_call>, but then never continues after the tool call completes. The orchestrator (Goose) waits for the model to resume, and the model thinks the turn is over, so both sides stop, no crash, no error, just silence.
It’s a deadlock between Goose waiting for the model’s continuation and the model waiting for something else that never arrives. Qwen models tend to do this because they expect structured JSON-style tool call formats, and Goose uses a different token style.
TL;DR: it’s the model, not Goose, llama.cpp, or SillyTavern.
https://github.com/QwenLM/qwen-code/issues/176
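The deadlock described above can be sketched in miniature. This is a hypothetical Python illustration, not goose's actual code: `run_turn` and the message queue stand in for the orchestrator's wait loop, and the point is the explicit-error behavior requested earlier in the thread, using a timeout so silence after a tool call raises instead of hanging forever.

```python
import queue

def run_turn(model_messages, timeout_s=5.0):
    """Wait for model replies after each tool call; fail loudly on silence.

    Hypothetical sketch: `model_messages` stands in for the stream of
    replies coming back from the model backend.
    """
    transcript = []
    while True:
        try:
            msg = model_messages.get(timeout=timeout_s)
        except queue.Empty:
            # The deadlock described above: the model thinks the turn is
            # over, the orchestrator is still waiting. Surface it.
            raise RuntimeError(
                f"no continuation from model within {timeout_s}s after the "
                "last tool call; possible tool-call format mismatch")
        transcript.append(msg)
        if msg.get("type") == "final":
            return transcript
        # Otherwise msg is a tool call: a real orchestrator would execute
        # the tool here, send the result back, and loop for the next reply.
```

With a model that goes quiet after its tool call, the loop raises within `timeout_s` instead of sitting at 0% context indefinitely, which is roughly the explicit error message asked for above.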
Edit: Switched models and it ran for a lot longer but still failed XD
~/AI/-=Assistant=- took 5s ❯ goose starting session | provider: ollama model: qwen3-coder session id: 20251108_1 working directory: /home/innomen/AI/-=Assistant=-
goose is running! Enter your instructions, or try asking what goose can do.
Context: ○○○○○○○○○○ 0% (0/262144 tokens) ( O)> In this directory you'll find a file of biographical data and the summary of a new outlook on what can be known. Write a brief overview of your thoughts in a file called AI.md (use markdown format please.)
─── create_task | dynamic_task ────────────────────────── task_parameters:
extensions: instructions: Write a brief overview of my thoughts in a file called AI.md (use markdown format please). return_last_only: true
─── execute_task | subagent ────────────────────────── execution_mode: sequential task_ids: dd43e5a6-af57-4c30-9ece-b108d07dbeb5
thread 'main' has overflowed its stack fatal runtime error: stack overflow, aborting fish: Job 1, 'goose' terminated by signal SIGABRT (Abort)
~/AI/-=Assistant=- took 3h59m43s ❯
thanks for diving into this, I learned something too. we'll add better error logging in https://github.com/block/goose/pull/5609