Loading stablediffusion by default instead of diffusers is not working
LocalAI version:
LocalAI v2.19.4 Docker Image ID: d99f62d40302 / TAG: latest-cpu
Environment, CPU architecture, OS, and Version:
uname -a: Linux XXX 6.8.0-39-generic #39~22.04.1-Ubuntu SMP PREEMPT_DYNAMIC Wed Jul 10 15:35:09 UTC 2 x86_64 x86_64 x86_64 GNU/Linux
CPU: AMD Ryzen 9 5950X 16-Core Processor
RAM: 32 GB
Describe the bug
Loading stablediffusion by default instead of diffusers is not working.
To Reproduce
Pull the latest latest-cpu image and run it with the environment variable COMPEL=0.
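For reference, the reproduction setup might look like the following sketch. The container name, port mapping, and models-volume path are assumptions, not taken from the report:

```shell
# Sketch of the reproduction setup; container name, port, and volume path are assumptions.
docker pull localai/localai:latest-cpu
docker run -d --name local-ai \
  -e COMPEL=0 \
  -p 8080:8080 \
  -v "$PWD/models:/build/models" \
  localai/localai:latest-cpu
```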
I set up dreamshaper using local-ai models install dreamshaper, and changed the "dreamshaper.yaml" in the models folder to the following content:
name: stablediffusion
parameters:
  model: DreamShaper_8_pruned.safetensors
backend: diffusers
step: 25
f16: false
diffusers:
  pipeline_type: StableDiffusionPipeline
  cuda: false
  enable_parameters: "negative_prompt,num_inference_steps"
  scheduler_type: "euler_a"
download_files:
  - filename: DreamShaper_8_pruned.safetensors
    uri: huggingface://Lykon/DreamShaper/DreamShaper_8_pruned.safetensors
usage: |
  curl http://localhost:8080/v1/images/generations \
    -H "Content-Type: application/json" \
    -d '{
      "prompt": "<positive prompt>|<negative prompt>",
      "step": 25,
      "size": "512x512"
    }'
Then I ran the command as stated in the docs:
curl http://localhost:8080/v1/images/generations \
  -H "Content-Type: application/json" \
  -d '{
    "prompt": "A cat outdoor during sunny day",
    "model": "stablediffusion",
    "size": "512x512"
  }'
Expected behavior
The image should be generated. Instead, the model never loads, and I always get this log:
ERR failed starting/connecting to the gRPC service error="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial tcp 127.0.0.1:42383: connect: connection refused\""
However, when following the same steps using the localai/localai:v2.19.4-ffmpeg Docker image, it works.
Logs
5:29PM INF Loading model 'DreamShaper_8_pruned.safetensors' with backend diffusers
5:29PM DBG Loading model in memory from file: /build/models/DreamShaper_8_pruned.safetensors
5:29PM DBG Loading Model DreamShaper_8_pruned.safetensors with gRPC (file: /build/models/DreamShaper_8_pruned.safetensors) (backend: diffusers): {backendString:diffusers model:DreamShaper_8_pruned.safetensors threads:32 assetDir:/tmp/localai/backend_data context:{emptyCtx:{}} gRPCOptions:0xc0006bd688 externalBackends:map[autogptq:/build/backend/python/autogptq/run.sh bark:/build/backend/python/bark/run.sh coqui:/build/backend/python/coqui/run.sh diffusers:/build/backend/python/diffusers/run.sh exllama:/build/backend/python/exllama/run.sh exllama2:/build/backend/python/exllama2/run.sh huggingface-embeddings:/build/backend/python/sentencetransformers/run.sh mamba:/build/backend/python/mamba/run.sh openvoice:/build/backend/python/openvoice/run.sh parler-tts:/build/backend/python/parler-tts/run.sh petals:/build/backend/python/petals/run.sh rerankers:/build/backend/python/rerankers/run.sh sentencetransformers:/build/backend/python/sentencetransformers/run.sh transformers:/build/backend/python/transformers/run.sh transformers-musicgen:/build/backend/python/transformers-musicgen/run.sh vall-e-x:/build/backend/python/vall-e-x/run.sh vllm:/build/backend/python/vllm/run.sh] grpcAttempts:20 grpcAttemptsDelay:2 singleActiveBackend:false parallelRequests:false}
5:29PM DBG Loading external backend: /build/backend/python/diffusers/run.sh
5:29PM DBG Loading GRPC Process: /build/backend/python/diffusers/run.sh
5:29PM DBG GRPC Service for DreamShaper_8_pruned.safetensors will be running at: '127.0.0.1:42383'
5:29PM DBG GRPC Service state dir: /tmp/go-processmanager3925853376
5:29PM DBG GRPC Service Started
5:29PM DBG GRPC(DreamShaper_8_pruned.safetensors-127.0.0.1:42383): stdout Initializing libbackend for build
5:29PM DBG GRPC(DreamShaper_8_pruned.safetensors-127.0.0.1:42383): stderr /build/backend/python/diffusers/../common/libbackend.sh: line 78: uv: command not found
5:29PM DBG GRPC(DreamShaper_8_pruned.safetensors-127.0.0.1:42383): stdout virtualenv created
5:29PM DBG GRPC(DreamShaper_8_pruned.safetensors-127.0.0.1:42383): stdout virtualenv activated
5:29PM DBG GRPC(DreamShaper_8_pruned.safetensors-127.0.0.1:42383): stdout activated virtualenv has been ensured
5:29PM DBG GRPC(DreamShaper_8_pruned.safetensors-127.0.0.1:42383): stderr /build/backend/python/diffusers/../common/libbackend.sh: line 83: /build/backend/python/diffusers/venv/bin/activate: No such file or directory
5:29PM DBG GRPC(DreamShaper_8_pruned.safetensors-127.0.0.1:42383): stderr /build/backend/python/diffusers/../common/libbackend.sh: line 155: exec: python: not found
5:29PM DBG [WatchDog] Watchdog checks for busy connections
5:29PM DBG [WatchDog] Watchdog checks for idle connections
5:29PM ERR failed starting/connecting to the gRPC service error="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial tcp 127.0.0.1:42383: connect: connection refused\""
5:29PM DBG GRPC Service NOT ready
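The "uv: command not found" and "exec: python: not found" lines suggest the image ships no Python toolchain at all. A quick check along these lines can confirm that (the container name "local-ai" is a placeholder):

```shell
# Check whether the Python tooling the diffusers backend expects exists in the container.
# "local-ai" is a placeholder container name.
docker exec local-ai sh -c 'command -v python; command -v uv; command -v pip'
```

If all three commands print nothing, the diffusers backend cannot start in that image.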
@sestren CPU images do not ship a Python environment, to save on size. Just use the latest tag, or any other image that does not have the core tag. See also the docs here: https://localai.io/basics/container/#standard-container-images
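Following that suggestion, switching to an image that includes the Python backends might look like this sketch (port and volume mapping are assumptions):

```shell
# Use an image that bundles the Python environment the diffusers backend needs.
# Port and models-volume path are assumptions.
docker pull localai/localai:latest
docker run -d -p 8080:8080 \
  -v "$PWD/models:/build/models" \
  localai/localai:latest
```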
I tried the latest tag. However, this tag also incorporates the calls to download the gpt4, vision, etc. models.
I used latest-cpu instead of latest-aio-cpu or latest as per the documentation (i.e. one without the -core tag), as I understood it would act like latest-aio-cpu without the packaged models.
This issue is stale because it has been open 90 days with no activity. Remove stale label or comment or this will be closed in 5 days.
This issue was closed because it has been stalled for 5 days with no activity.