Error generating images with LocalAI integrated with Nextcloud AI (CPU only)
LocalAI version: localai/localai:master-ffmpeg-core
Environment, CPU architecture, OS, and Version:
- Docker Image: localai/localai:master-ffmpeg-core
- Environment Variable: CMAKE_ARGS=-DGGML_AVX512=OFF
- CPU: Intel(R) Xeon(R) CPU E3-1240
- CPU Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb invpcid_single pti ssbd ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid mpx rdseed adx smap clflushopt intel_pt xsaveopt xsavec xgetbv1 xsaves dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp md_clear flush_l1d arch_capabilities
- CPU Capabilities:
  - AVX: Found
  - AVX2: Found
  - AVX512: Not Found
- GPU: Not available
Description:
I am encountering an error while generating images using LocalAI integrated with Nextcloud AI. The error appears in the LocalAI logs as follows:
ERR Server error error="could not load model: rpc error: code = Unknown desc = stat /build/models/stablediffusion: no such file or directory" ip=172.18.1.2 latency=2.004227305s method=POST status=500 url=/v1/images/generations
However, when I use the LocalAI web interface directly, image generation works fine.
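The `stat /build/models/stablediffusion` error suggests the backend treats the requested model name as an entry under the models directory and fails when nothing by that name exists. A minimal Python sketch of that failure mode (the paths and names here mirror the log; the function is illustrative, not LocalAI's actual code):

```python
import os
import tempfile


def resolve_model_path(models_dir: str, model_name: str) -> str:
    """Mimic the lookup implied by the log: the requested model name
    must exist as an entry under the models directory."""
    path = os.path.join(models_dir, model_name)
    if not os.path.exists(path):
        # Mirrors the "no such file or directory" stat error in the log
        raise FileNotFoundError(f"stat {path}: no such file or directory")
    return path


# The installed model is "stablediffusion-cpp", but the integration
# apparently asks for "stablediffusion", so the lookup fails:
with tempfile.TemporaryDirectory() as models_dir:
    open(os.path.join(models_dir, "stablediffusion-cpp"), "w").close()
    try:
        resolve_model_path(models_dir, "stablediffusion")
    except FileNotFoundError as exc:
        print(exc)
```

This would explain why the web UI (which selects the installed model explicitly) works while the integration does not.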
Steps to Reproduce:
- Integrate LocalAI with Nextcloud AI.
- Attempt to generate an image through the Nextcloud AI interface.
Attempts to Resolve:
- Set the environment variable CMAKE_ARGS=-DGGML_AVX512=OFF.
- Tried both with and without REBUILD=true.
Observations:
- Image generation works fine when using the LocalAI web interface directly.
- Error occurs specifically when using Nextcloud AI integration.
Expected Behavior:
Image generation should work seamlessly when using Nextcloud AI integration with LocalAI, similar to when using the LocalAI web interface directly.
I can try to reproduce this on Monday. Which Nextcloud and app versions are you using, and what are your settings for the integration? Can you get the request from the LocalAI log, and not only the error message?
Sure.
I am using:
- Nextcloud: version 29.0.4
- Nextcloud Assistant app, version 1.1.0 (https://github.com/nextcloud/assistant)
- OpenAI and LocalAI integration app, version 2.0.3 (https://github.com/nextcloud/integration_openai)
- LocalAI: version master
Here is the docker compose I am using:
services:
  db:
    image: postgres:16-alpine
    environment:
      - POSTGRES_DB=nextcloud
      - POSTGRES_USER=nextcloud
      - POSTGRES_PASSWORD=nextcloud
    volumes:
      - db:/var/lib/postgresql/data
    restart: unless-stopped
  app:
    image: nextcloud:29-apache
    environment:
      - POSTGRES_HOST=db
    ports:
      - 8000:80
    volumes:
      - nextcloud:/var/www/html
    restart: unless-stopped
    depends_on:
      - db
  local-ai:
    image: localai/localai:master-ffmpeg-core
    environment:
      #- REBUILD=true
      - CMAKE_ARGS=-DGGML_AVX512=OFF
    ports:
      - 8080:8080
    volumes:
      - ai_models:/build/models
    restart: unless-stopped
volumes:
  db:
  nextcloud:
  ai_models:
And I installed the stablediffusion-cpp model from the LocalAI interface.
The error I see in nextcloud is:
API request error : Server error: `POST http://local-ai:8080/v1/images/generations` resulted in a `500 Internal Server Error` response: {"error":{"code":500,"message":"could not load model: rpc error: code = Unknown desc = stat /build/models/stablediffusio (truncated...)
The configuration is simple: I just added the local URL http://local-ai:8080 in the app settings and switched the text-completion endpoint selector to chat completions. It automatically set all the other fields.
I want to confirm that all the other features work well: translation, text processing, and speech to text.
@adripo did you install any model in LocalAI?
@mudler Yes, as I already mentioned in my previous response, I installed stablediffusion-cpp for image generation, and it works fine from the LocalAI interface, but it returns that specific error when run from Nextcloud. Text generation works well with any model.
> And I installed the stablediffusion-cpp model from the LocalAI interface.
I have the same versions installed, except I have local-ai:v2.19.2-ffmpeg and Nextcloud is not installed via Docker. When I run the image generation the first time, LocalAI returns:
10:38AM ERR failed starting/connecting to the gRPC service error="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial tcp 127.0.0.1:40341: connect: connection refused\""
10:38AM DBG GRPC Service NOT ready
10:38AM ERR Server error error="grpc service not ready" ip=172.20.0.1 latency=40.353998174s method=POST status=500 url=/v1/images/generations
On the second try everything works in LocalAI, but Nextcloud does not show the result. This is the request sent from Nextcloud:
10:37AM DBG Request received: {"model":"","language":"","translate":false,"n":0,"top_p":null,"top_k":null,"temperature":null,"max_tokens":null,"echo":false,"batch":0,"ignore_eos":false,"repeat_penalty":0,"repeat_last_n":0,"n_keep":0,"frequency_penalty":0,"presence_penalty":0,"tfz":null,"typical_p":null,"seed":null,"negative_prompt":"","rope_freq_base":0,"rope_freq_scale":0,"negative_prompt_scale":0,"use_fast_tokenizer":false,"clip_skip":0,"tokenizer":"","file":"","size":"1024x1024","prompt":"cat","instruction":"","input":null,"stop":null,"messages":null,"functions":null,"function_call":null,"stream":false,"mode":0,"step":0,"grammar":"","grammar_json_functions":null,"backend":"","model_base_name":""}
It seems Nextcloud does not ask for a model; in the past it asked for dall-e, and this was not configurable. Maybe this helps to figure out whether this is a Nextcloud or a LocalAI problem.
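The debug log above shows the integration sends an empty "model" field, so LocalAI cannot tell which installed backend to load. A hedged sketch of the difference, using only fields taken from the logged request (whether the integration can be configured to send a model name is exactly the open question here):

```python
import json

# Request body as logged from Nextcloud: note the empty "model" field.
logged_request = {"model": "", "size": "1024x1024", "prompt": "cat"}

# A request that names the installed model explicitly (the
# stablediffusion-cpp model mentioned earlier in the thread):
fixed_request = dict(logged_request, model="stablediffusion-cpp")

print(json.dumps(fixed_request))
```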
Has anyone figured it out? Just in case it helps, here is the full log, since the original post only holds a truncated version:
OpenAI/LocalAI's text to image generation failed with: API request error: could not load model: rpc error: code = Unknown desc = stat /models/stablediffusion: no such file or directory
edit: I made a PR which fixes it if you are running "stablediffusion-cpp" model: https://github.com/nextcloud/integration_openai/pull/118
I kept running into this issue:
OpenAI/LocalAI's text-to-image generation failed with: API request error: Resource not found
api_1 | 6:55PM WRN Client error ip=192.168.0.24 latency="84.841µs" method=POST status=404 url=/images/generations
You must append /v1/ to the URL, as the API requires it for image generation.
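Since the missing /v1/ prefix produced the 404 above, a small helper can normalize the base URL before building the endpoint. This is a sketch, not the integration's actual code; the actual URL handling in integration_openai may differ:

```python
def build_endpoint(base_url: str, path: str = "images/generations") -> str:
    """Ensure the LocalAI base URL carries the /v1 API prefix."""
    base = base_url.rstrip("/")
    if not base.endswith("/v1"):
        base += "/v1"
    return f"{base}/{path}"


print(build_endpoint("http://local-ai:8080"))
# http://local-ai:8080/v1/images/generations
```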
My chat is acting up:
Warning context_chat RuntimeException: Error during request to Context Chat Backend (ExApp): ExApp `context_chat_backend` not found.
Temporary problem with indexing, trying again soon.
After restarting, it suddenly worked. However, the images still don’t appear for me.
On the bright side, my PC is finally generating images! But strangely, the prompts are missing:
"prompt": "<positive prompt>|<negative prompt>"
I had the same issue using the web interface.
Solution for a working API request
To get more than just the chat working, make sure you use the correct API URL format:
http://IP:PORT/v1/
Just as the documentation states!
Further issues (timeouts)
On my second attempt, I ran into errors due to server overload:
no app in context RuntimeException: OpenAI/LocalAI's text-to-image generation failed with: cURL error 28: Operation timed out after 240002 milliseconds with 0 bytes received
integration_openai ConnectException: cURL error 28: Operation timed out after 240002 milliseconds with 0 bytes received
api_1 | 7:35PM DBG GRPC(stablediffusion-127.0.0.1:34215): stderr Writing PNG
api_1 | 7:35PM DBG GRPC(stablediffusion-127.0.0.1:34215): stderr DST: /tmp/generated/images/b64336988229.png
api_1 | 7:35PM DBG GRPC(stablediffusion-127.0.0.1:34215): stderr Width: 512
api_1 | 7:35PM DBG GRPC(stablediffusion-127.0.0.1:34215): stderr Height: 512
api_1 | 7:35PM DBG GRPC(stablediffusion-127.0.0.1:34215): stderr Channel: 3
api_1 | 7:35PM DBG GRPC(stablediffusion-127.0.0.1:34215): stderr Data: 0x78ee44451700
api_1 | 7:35PM DBG GRPC(stablediffusion-127.0.0.1:34215): stderr Saved resulting image to '/tmp/generated/images/b64336988229.png'
api_1 | 7:35PM DBG Response: {"created":1743449736,"id":"9cab5633-8a2e-4720-aef1-5830e8fecba0","data":[{"embedding":null,"index":0,"url":"http://192.168.0.24:8080/generated-images/b64336988229.png"}],"usage":{"prompt_tokens":0,"completion_tokens":0,"total_tokens":0}}
api_1 | 7:35PM INF Success ip=192.168.0.24 latency=10m9.479875304s method=POST status=200 url=/v1/images/generations
At least the server is now requesting the images. 😅
The images are being generated, but either take too long or aren't requested properly.
After increasing the request limit in the options and lowering the settings to speed up the process, the limit was still not accepted. This led to another error during image generation even though the process was still running, most likely because cURL timed out.
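The log above shows the generation itself succeeded after roughly 10 minutes (latency=10m9s ≈ 609 s), while cURL gave up at 240 s. A trivial way to size the client timeout against observed latency (the 1.5 safety factor is an arbitrary assumption):

```python
def recommended_timeout_s(observed_latency_s: float,
                          safety_factor: float = 1.5) -> float:
    """Pick a client timeout comfortably above the slowest observed run."""
    return observed_latency_s * safety_factor


# Slowest observed generation in the log: about 609 s; the 240 s cURL
# limit is far below it, which explains the timeout errors.
curl_limit_s = 240.0
print(recommended_timeout_s(609.0) > curl_limit_s)  # True
```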
Conclusion:
- Don't forget /v1/ in the API URL, or you'll get errors!
- For now, it's still unreliable.
This issue is stale because it has been open 90 days with no activity. Remove stale label or comment or this will be closed in 5 days.
This issue was closed because it has been stalled for 5 days with no activity.