Ollama crashes and Alpaca doesn't handle it well
Describe the bug
I downloaded the Magistral model, and I guess it is too big for my computer to handle. If I start a conversation and try to use this model, I get an error that says "connection error". This seems like a reasonable error message if ollama has crashed and Alpaca couldn't connect, but the user isn't told anything about the crash and ollama is not restarted. If I run ollama ps, it confirms that ollama isn't running. If I go to the Manage Instances menu option, Alpaca starts the ollama instance up again.
Expected behavior
If something causes ollama to crash, it should be restarted immediately and the user informed.
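Something along the lines of this shell sketch (not Alpaca's real code; I'm assuming Ollama's default port 11434, and notify-send is just a stand-in for however Alpaca surfaces messages to the user):

while true; do
  # Liveness check against Ollama's HTTP API (default port assumed)
  if ! curl -sf http://127.0.0.1:11434/api/version > /dev/null; then
    notify-send "Alpaca" "Ollama stopped unexpectedly, restarting it..."  # inform the user
    ollama serve &                                                        # restart the instance
  fi
  sleep 5
done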
Screenshots
If applicable, add screenshots to help explain your problem.
Debugging information
Please include the output of Alpaca. To get it, run Alpaca from the terminal and then try to reproduce the error you want to report.
flatpak run com.jeffser.Alpaca
Are you using the integrated Ollama instance? Because it doesn't sound like it.
I am using the managed instance.
Okay, but the managed service is confined to the sandbox. Did you run ollama ps inside the sandbox or on your host machine? Those are two different instances; the port of the integrated instance is higher by one.
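For example, something like this should tell them apart (hypothetical commands; 11434 is Ollama's default port, so the integrated instance would answer on 11435, and whether the ollama client is on the sandbox's PATH is my assumption):

curl http://127.0.0.1:11434/api/version   # system instance on the host, if any
curl http://127.0.0.1:11435/api/version   # sandboxed (integrated) instance
flatpak run --command=sh com.jeffser.Alpaca   # open a shell inside the sandbox to run ollama ps there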
To be clear about the integrated instance: it is a different binary that lives inside the sandbox. If you already had Ollama installed, you could use it directly with Alpaca by adding a new Ollama (not integrated) instance.
Of course, your system's Ollama instance and the sandboxed one will have different models.
It seems to me like they are one and the same. [image: image.png]
[image: image.png]
The images weren't uploaded correctly; at least, they don't render for me. It just says [image: image.png].
Sorry about that!
Are you using the Flatpak version of Alpaca?
Yes, from Flathub on Fedora.
Then I can explain what is happening here: you have two installations of Ollama, one at the system level and one that is sandboxed. The sandboxed one can't start because the system one is already using the network port, so Alpaca just uses the system instance without managing it.
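You can confirm the port conflict from the host with something like this (assuming Ollama's default port 11434):

sudo ss -tlnp | grep 11434   # shows which process is listening on Ollama's port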
I suggest you uninstall the Ollama extension from Alpaca and make a normal (not managed) Ollama instance.
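If everything came from Flathub, removing the extension would look something like this (the exact extension ID here is my assumption):

flatpak uninstall com.jeffser.Alpaca.Plugins.Ollama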
I have uninstalled the system ollama instance, so I should only have the managed instance, and the same error still happens.
The Ollama installation you have on your system can't be managed by Alpaca; you are not using a managed instance.
To the best of my knowledge, it can manage the instance. What can I do to demonstrate this?