
Docker Update Wipes Configuration

wormuths opened this issue 1 month ago · 5 comments

LocalAI version: v3.5.4

Environment, CPU architecture, OS, and Version: Docker on an UnRAID NAS.

Describe the bug
I may be confused about how this works.

I have everything set up and working. I am using the Docker library in UnRAID to set this up, but when I update after it tells me a new version is available, it reverts to an unconfigured state. Everything I configured needs to be configured again.

I don't have any models in my "models" folder after configuring things. Should I add models to this folder so that they auto-configure? Otherwise, why would this folder be empty?

[screenshot attached]

How do I use this without constant re-configuring after each upgrade?

To Reproduce
Update your Docker container.

Expected behavior
Configuration should persist across updates; only the software should be updated.

Logs

Additional context
This may be explained somewhere obvious, but I couldn't find the information in the documentation. This is great software, thank you.

wormuths commented Oct 30 '25

I had the same problem.
Initially, I tried using Docker's volume option to mount a host models folder onto the container's /models, which did preserve the configuration, but it was extremely slow due to Docker's poor file read/write performance on Windows hosts. Ideally there would be a /config folder to volume-mount on the host (to preserve the configuration settings) that would keep the .yaml files for the models and re-download the models as needed.

loopy321 commented Nov 02 '25

> I had the same problem. Initially, I tried using Docker's volume option to mount a host models folder onto the container's /models, which did preserve the configuration, but it was extremely slow due to Docker's poor file read/write performance on Windows hosts. Ideally there would be a /config folder to volume-mount on the host (to preserve the configuration settings) that would keep the .yaml files for the models and re-download the models as needed.

I'm a novice with Docker, but UnRAID usually makes it pretty easy. I followed the instructions for the container, but this was the result. Is this not using the /config folder like it's supposed to?

Are you using UnRAID also? If so, how "slow" is "extremely slow"? If it solves this problem, I would like to try it. If it's only slow to load up initially, that would be fine; this isn't really a demanding use case for me. I'd still rather it be fixed so that the config isn't wiped on upgrade, though...

wormuths commented Nov 02 '25

I am not an UnRAID user, just using Docker Desktop on Windows 11. I used the following docker command, which did preserve my configured models but was about 4x slower than running without the volume mapping. I've read that the volume issue might be a Windows host problem, so it might work well for you on a Linux-based system. The "-v $host_folder:/models" part is the important bit:

docker run -ti --name local-ai -e DEBUG=true -p 8080:8080 --restart=unless-stopped -v //d/docker/local-ai/models:/models --gpus all -d localai/localai:latest-aio-gpu-nvidia-cuda-12

loopy321 commented Nov 03 '25

> I am not an UnRAID user, just using Docker Desktop on Windows 11. I used the following docker command, which did preserve my configured models but was about 4x slower than running without the volume mapping. I've read that the volume issue might be a Windows host problem, so it might work well for you on a Linux-based system. The "-v $host_folder:/models" part is the important bit:
>
> docker run -ti --name local-ai -e DEBUG=true -p 8080:8080 --restart=unless-stopped -v //d/docker/local-ai/models:/models --gpus all -d localai/localai:latest-aio-gpu-nvidia-cuda-12

Just out of curiosity, do you know what happens if I add a model (.gguf) file in the location I showed above?

[screenshot attached]

The "local_ai/models" folder is just a share on UnRAID mapped to the "build/models" folder in LocalAI. I couldn't find the documentation explaining it, but I'm wondering if that is how UnRAID does exactly what you are describing.

If anyone knows the answer, I would appreciate some clarification.
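For what it's worth, model definitions in LocalAI's models folder are typically small YAML files that point at the weight file, so a .gguf dropped there is usually paired with a config like the one below. This is only a minimal sketch with a hypothetical model name, assuming the llama.cpp backend; the exact schema can vary by version:

```yaml
# Hypothetical /models/my-model.yaml sitting next to my-model.gguf
name: my-model        # the name you address via the API
backend: llama-cpp    # backend assumed; check your version's docs
parameters:
  model: my-model.gguf
```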

wormuths commented Nov 03 '25

I wanted to update to the new LocalAI version v3.7.0, so I figured this out, and it worked for me. First, run docker inspect <your_container> and extract the volume name associated with /models (and /backends, if desired). Mine was a long string of characters. Using the following docker-compose.yaml (derived from the one provided at https://localai.io/basics/container/ ), I was able to keep the configuration (/models folder) from my docker run instance above:
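To avoid scrolling through the full inspect output, a Go-template filter can print just the mounts. This is a sketch assuming the container is named local-ai, as in the earlier docker run command; it requires a running Docker daemon:

```shell
# Print "volume-name -> container-path" for each mount, so you can
# spot the anonymous volumes backing /models and /backends.
docker inspect --format \
  '{{range .Mounts}}{{.Name}} -> {{.Destination}}{{"\n"}}{{end}}' \
  local-ai
```

The long hexadecimal names printed next to /models and /backends are what go into the external volume entries of the compose file.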

services:
  localai:
    container_name: localai
    # image: localai/localai:latest-aio-cpu
    # For a specific version:
    # image: localai/localai:v3.6.0-aio-cpu
    # For Nvidia GPUs uncomment one of the following (cuda11 or cuda12):
    # image: localai/localai:v3.6.0-aio-gpu-nvidia-cuda-11
    # image: localai/localai:v3.6.0-aio-gpu-nvidia-cuda-12
    # image: localai/localai:latest-aio-gpu-nvidia-cuda-11
    image: localai/localai:latest-aio-gpu-nvidia-cuda-12
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8080/readyz"]
      interval: 1m
      timeout: 20m
      retries: 5
    ports:
      - 8080:8080
    restart: unless-stopped
    environment:
      - DEBUG=true
      # ...
      - BUILD_TYPE=cublas
      - HEALTHCHECK_ENDPOINT=http://localhost:8080/readyz
      - NVIDIA_DRIVER_CAPABILITIES=compute,utility
      - NVIDIA_REQUIRE_CUDA=cuda>=12.0
      - NVIDIA_VISIBLE_DEVICES=all
    # uncomment the following section if running with Nvidia GPUs
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]
    volumes:
      - backends:/backends
      - models:/models
volumes:
  backends:
    external: true
    name: <your_prior_backends_volume_name>
  models:
    external: true
    name: <your_prior_models_volume_name>

You will need to stop your prior running container. Then do:

docker compose pull 
docker compose up -d

voilà!
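Once the new container is up, a quick sanity check (assuming the 8080 port mapping from the compose file, and that your LocalAI build serves the OpenAI-style model listing) is to hit the same readiness endpoint the healthcheck uses and then list the registered models; this requires the stack to actually be running:

```shell
# Readiness endpoint used by the compose healthcheck above;
# -f makes curl exit non-zero if the service isn't ready yet.
curl -f http://localhost:8080/readyz
# List the registered models to confirm the config survived the update.
curl http://localhost:8080/v1/models
```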

loopy321 avatar Nov 09 '25 16:11 loopy321