
llama.cpp instance returns an error when trying to upload an image

Open Quant-ux opened this issue 3 months ago • 1 comment

Describe the bug

Alpaca fails to supply a model running on a local llama.cpp instance with an image, resulting in the following error message:

Error code: 500 - {'error': {'code': 500, 'message': 'Invalid image_url.url value', 'type': 'server_error'}}

I can confirm that the model that I use (gemma-3-4b-it-GGUF) can process images just fine when interacting with the instance through Newelle.
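For context, llama.cpp's OpenAI-compatible `/chat/completions` endpoint generally expects images to arrive as a data URL inside `image_url.url` (an `Invalid image_url.url value` error typically means the client sent something else, e.g. a bare file path or raw bytes). A minimal sketch of the expected message shape, assuming base64-encoded image bytes (the function name and prompt here are illustrative, not from Alpaca's code):

```python
import base64

def build_image_message(image_bytes: bytes, mime: str, prompt: str) -> dict:
    """Build an OpenAI-style chat message carrying an image as a data URL.

    llama.cpp's server accepts this shape when the loaded model has a
    multimodal projector; the data URL goes in image_url.url.
    """
    b64 = base64.b64encode(image_bytes).decode("ascii")
    return {
        "role": "user",
        "content": [
            {"type": "text", "text": prompt},
            {
                "type": "image_url",
                "image_url": {"url": f"data:{mime};base64,{b64}"},
            },
        ],
    }

# Example: a (truncated, fake) PNG payload just to show the resulting shape.
msg = build_image_message(b"\x89PNG fake bytes", "image/png", "Describe this image.")
print(msg["content"][1]["image_url"]["url"][:22])  # data:image/png;base64,
```

If the client instead places a filesystem path or an unencoded string in `image_url.url`, llama.cpp rejects the request with the 500 error quoted above.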

Additionally, while attaching an image, the following message is shown:

This model might not be compatible with image recognition

Expected behavior

LLMs self-hosted using llama.cpp should be able to process images.

Screenshots

[screenshot attached]

Debugging information

$ flatpak run com.jeffser.Alpaca
INFO	[main.py | main] Alpaca version: 7.7.5
INFO	[_client.py | _send_single_request] HTTP Request: GET http://0.0.0.0:8000/models "HTTP/1.1 200 OK"
INFO	[_client.py | _send_single_request] HTTP Request: POST http://0.0.0.0:8000/chat/completions "HTTP/1.1 500 Internal Server Error"
INFO	[_base_client.py | _sleep_for_retry] Retrying request to /chat/completions in 0.389014 seconds
INFO	[_client.py | _send_single_request] HTTP Request: POST http://0.0.0.0:8000/chat/completions "HTTP/1.1 500 Internal Server Error"
INFO	[_base_client.py | _sleep_for_retry] Retrying request to /chat/completions in 0.986072 seconds
INFO	[_client.py | _send_single_request] HTTP Request: POST http://0.0.0.0:8000/chat/completions "HTTP/1.1 200 OK"
INFO	[_client.py | _send_single_request] HTTP Request: POST http://0.0.0.0:8000/chat/completions "HTTP/1.1 500 Internal Server Error"
ERROR	[openai_instances.py | generate_response] Error code: 500 - {'error': {'code': 500, 'message': 'Invalid image_url.url value', 'type': 'server_error'}}

Quant-ux avatar Aug 30 '25 13:08 Quant-ux

Alpaca uses the OpenAI library to talk to external instances (anything that isn't Ollama).

llama.cpp isn't officially supported, and I'm not sure how it interacts with the OpenAI library.

Jeffser avatar Sep 08 '25 18:09 Jeffser