stable-diffusion-webui-docker
Failed to start invoke with error "Can't load tokenizer for 'openai/clip-vit-large-patch14'."
Failed to start invoke with the error "Can't load tokenizer for 'openai/clip-vit-large-patch14'". Even after removing the data/.cache folder, it still doesn't work...
Could you give a suggestion?
Thanks
docker compose --profile invoke up --build
[+] Building 7.9s (24/24) FINISHED
=> [internal] load build definition from Dockerfile 0.4s
=> => transferring dockerfile: 32B 0.0s
=> [internal] load .dockerignore 0.3s
=> => transferring context: 2B 0.0s
=> resolve image config for docker.io/docker/dockerfile:1 2.6s
=> CACHED docker-image://docker.io/docker/dockerfile:1@sha256:39b85bbfa7536a5feceb7372a0817649ecb2724562a38360f4d6a7782a4 0.0s
=> [internal] load .dockerignore 0.0s
=> [internal] load build definition from Dockerfile 0.3s
=> => transferring dockerfile: 32B 0.0s
=> [internal] load metadata for docker.io/library/python:3.10-slim 0.9s
=> [internal] load metadata for docker.io/library/alpine:3.17 1.3s
=> [internal] load build context 0.5s
=> => transferring context: 96B 0.0s
=> [xformers 1/3] FROM docker.io/library/alpine:3.17@sha256:69665d02cb32192e52e07644d76bc6f25abeb5410edc1c7a81a10ba3f0efb 0.0s
=> [stage-1 1/11] FROM docker.io/library/python:3.10-slim@sha256:68b5a9d037476051abdf6b00fd6a9a978455a33d5d32ce79b8d183b 0.0s
=> CACHED [stage-1 2/11] RUN --mount=type=cache,target=/root/.cache/pip pip install torch==1.13.1+cu117 torchvision --ex 0.0s
=> CACHED [stage-1 3/11] RUN apt-get update && apt-get install git -y && apt-get clean 0.0s
=> CACHED [stage-1 4/11] RUN git clone https://github.com/invoke-ai/InvokeAI.git /stable-diffusion 0.0s
=> CACHED [stage-1 5/11] WORKDIR /stable-diffusion 0.0s
=> CACHED [stage-1 6/11] RUN --mount=type=cache,target=/root/.cache/pip <<EOF (git reset --hard f232068ab89bd80e4f5f3133 0.0s
=> CACHED [stage-1 7/11] RUN <<EOF (apt-get update...) 0.0s
=> CACHED [stage-1 8/11] RUN --mount=type=cache,target=/root/.cache/pip <<EOF (git fetch...) 0.0s
=> CACHED [xformers 2/3] RUN apk add --no-cache aria2 0.0s
=> CACHED [xformers 3/3] RUN aria2c -x 5 --dir / --out wheel.whl 'https://github.com/AbdBarho/stable-diffusion-webui-dock 0.0s
=> CACHED [stage-1 9/11] RUN --mount=type=cache,target=/root/.cache/pip --mount=type=bind,from=xformers,source=/wheel. 0.0s
=> CACHED [stage-1 10/11] RUN touch invokeai.init 0.0s
=> CACHED [stage-1 11/11] COPY . /docker/ 0.0s
=> exporting to image 0.5s
=> => exporting layers 0.0s
=> => writing image sha256:8d125b74a5d817c98ca3302a3abaf974ca41dc5481ae045f1cf915423cad769e 0.0s
=> => naming to docker.io/library/sd-invoke:25 0.0s
[+] Running 1/0
⠿ Container webui-docker-invoke-1 Created 0.0s
Attaching to webui-docker-invoke-1
webui-docker-invoke-1 | Mounted .cache
webui-docker-invoke-1 | Mounted embeddings
webui-docker-invoke-1 | mkdir: created directory '/data/.cache/huggingface'
webui-docker-invoke-1 | mkdir: created directory '/data/.cache/huggingface/transformers/'
webui-docker-invoke-1 | Mounted transformers
webui-docker-invoke-1 | mkdir: created directory '/data/.cache/invoke'
webui-docker-invoke-1 | mkdir: created directory '/data/.cache/invoke/clipseg/'
webui-docker-invoke-1 | Mounted clipseg
webui-docker-invoke-1 | Mounted .cache
webui-docker-invoke-1 | Mounted RealESRGAN
webui-docker-invoke-1 | Mounted transformers
webui-docker-invoke-1 | Mounted GFPGANv1.4.pth
webui-docker-invoke-1 | Mounted Codeformer
webui-docker-invoke-1 | Mounted transformers
webui-docker-invoke-1 | + invokeai --web --host 0.0.0.0 --port 7860 --config /docker/models.yaml --root_dir /stable-diffusion --outdir /output/invoke
webui-docker-invoke-1 | * Initializing, be patient...
webui-docker-invoke-1 | >> Initialization file /stable-diffusion/invokeai.init found. Loading...
webui-docker-invoke-1 | >> Internet connectivity is False
webui-docker-invoke-1 | >> InvokeAI, version 2.3.1
webui-docker-invoke-1 | >> InvokeAI runtime directory is "/stable-diffusion"
webui-docker-invoke-1 | >> GFPGAN Initialized
webui-docker-invoke-1 | ## NOT FOUND: CodeFormer model not found at /stable-diffusion/models/codeformer/codeformer.pth
webui-docker-invoke-1 | >> CodeFormer Disabled
webui-docker-invoke-1 | >> ESRGAN Initialized
webui-docker-invoke-1 | >> Using device_type cpu
webui-docker-invoke-1 | >> xformers memory-efficient attention is available but disabled
webui-docker-invoke-1 | >> NSFW checker is disabled
webui-docker-invoke-1 | >> Current VRAM usage: 0.00G
webui-docker-invoke-1 | >> Loading stable-diffusion-1.5 from /data/StableDiffusion/v1-5-pruned-emaonly.ckpt
webui-docker-invoke-1 | >> Scanning Model: stable-diffusion-1.5
webui-docker-invoke-1 | >> Model scanned ok
webui-docker-invoke-1 | >> Loading stable-diffusion-1.5 from /data/StableDiffusion/v1-5-pruned-emaonly.ckpt
webui-docker-invoke-1 | | Forcing garbage collection prior to loading new model
webui-docker-invoke-1 | | LatentDiffusion: Running in eps-prediction mode
webui-docker-invoke-1 | | DiffusionWrapper has 859.52 M params.
webui-docker-invoke-1 | | Making attention of type 'vanilla' with 512 in_channels
webui-docker-invoke-1 | | Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
webui-docker-invoke-1 | | Making attention of type 'vanilla' with 512 in_channels
webui-docker-invoke-1 | ** model stable-diffusion-1.5 could not be loaded: Can't load tokenizer for 'openai/clip-vit-large-patch14'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure 'openai/clip-vit-large-patch14' is the correct path to a directory containing all relevant files for a CLIPTokenizer tokenizer.
webui-docker-invoke-1 | ** trying to reload previous model
webui-docker-invoke-1 | >> Loading stable-diffusion-1.5 from /data/StableDiffusion/v1-5-pruned-emaonly.ckpt
webui-docker-invoke-1 | Traceback (most recent call last):
webui-docker-invoke-1 | File "/stable-diffusion/ldm/generate.py", line 956, in set_model
webui-docker-invoke-1 | model_data = cache.get_model(model_name)
webui-docker-invoke-1 | File "/stable-diffusion/ldm/invoke/model_manager.py", line 119, in get_model
webui-docker-invoke-1 | requested_model, width, height, hash = self._load_model(model_name)
webui-docker-invoke-1 | File "/stable-diffusion/ldm/invoke/model_manager.py", line 349, in _load_model
webui-docker-invoke-1 | model, width, height, model_hash = self._load_ckpt_model(
webui-docker-invoke-1 | File "/stable-diffusion/ldm/invoke/model_manager.py", line 450, in _load_ckpt_model
webui-docker-invoke-1 | model = instantiate_from_config(omega_config.model)
webui-docker-invoke-1 | File "/stable-diffusion/ldm/util.py", line 93, in instantiate_from_config
webui-docker-invoke-1 | return get_obj_from_str(config["target"])(**config.get("params", dict()), **kwargs)
webui-docker-invoke-1 | File "/stable-diffusion/ldm/models/diffusion/ddpm.py", line 659, in __init__
webui-docker-invoke-1 | self.instantiate_cond_stage(cond_stage_config)
webui-docker-invoke-1 | File "/stable-diffusion/ldm/models/diffusion/ddpm.py", line 777, in instantiate_cond_stage
webui-docker-invoke-1 | model = instantiate_from_config(config)
webui-docker-invoke-1 | File "/stable-diffusion/ldm/util.py", line 93, in instantiate_from_config
webui-docker-invoke-1 | return get_obj_from_str(config["target"])(**config.get("params", dict()), **kwargs)
webui-docker-invoke-1 | File "/stable-diffusion/ldm/modules/encoders/modules.py", line 251, in __init__
webui-docker-invoke-1 | self.tokenizer = tokenizer or CLIPTokenizer.from_pretrained(
webui-docker-invoke-1 | File "/usr/local/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 1788, in from_pretrained
webui-docker-invoke-1 | raise EnvironmentError(
webui-docker-invoke-1 | OSError: Can't load tokenizer for 'openai/clip-vit-large-patch14'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure 'openai/clip-vit-large-patch14' is the correct path to a directory containing all relevant files for a CLIPTokenizer tokenizer.
webui-docker-invoke-1 |
webui-docker-invoke-1 | >> Scanning Model: stable-diffusion-1.5
webui-docker-invoke-1 | >> Model scanned ok
webui-docker-invoke-1 | >> Loading stable-diffusion-1.5 from /data/StableDiffusion/v1-5-pruned-emaonly.ckpt
webui-docker-invoke-1 | | Forcing garbage collection prior to loading new model
webui-docker-invoke-1 | | LatentDiffusion: Running in eps-prediction mode
webui-docker-invoke-1 | | DiffusionWrapper has 859.52 M params.
webui-docker-invoke-1 | | Making attention of type 'vanilla' with 512 in_channels
webui-docker-invoke-1 | | Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
webui-docker-invoke-1 | | Making attention of type 'vanilla' with 512 in_channels
webui-docker-invoke-1 | ** An error occurred while attempting to initialize the model: "Can't load tokenizer for 'openai/clip-vit-large-patch14'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure 'openai/clip-vit-large-patch14' is the correct path to a directory containing all relevant files for a CLIPTokenizer tokenizer."
webui-docker-invoke-1 | ** This can be caused by a missing or corrupted models file, and can sometimes be fixed by (re)installing the models.
webui-docker-invoke-1 | ╭───────────────────── Traceback (most recent call last) ──────────────────────╮
webui-docker-invoke-1 | │ /usr/local/bin/invokeai:8 in
This is usually a network problem when downloading files from the Hugging Face API.
see https://github.com/AbdBarho/stable-diffusion-webui-docker/issues/337
I will check again when I am home to see whether I can reproduce it consistently.
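In the meantime, one quick check is whether the tokenizer files ever made it into the mounted cache. Below is a sketch: the path is taken from the `Mounted transformers` / `mkdir` lines in the log above, and the file names are the usual set a cached `CLIPTokenizer` needs — adjust both if your layout differs.

```python
from pathlib import Path

# Files a cached CLIPTokenizer typically needs somewhere under the cache dir.
NEEDED = {"vocab.json", "merges.txt", "tokenizer_config.json"}

def missing_tokenizer_files(cache_dir):
    """Return the expected tokenizer files not found anywhere under cache_dir."""
    cache = Path(cache_dir)
    found = {p.name for p in cache.rglob("*") if p.is_file()} if cache.is_dir() else set()
    return NEEDED - found

# Path assumed from the "Mounted transformers" lines in the log above.
print("missing:", sorted(missing_tokenizer_files("data/.cache/huggingface/transformers")))
```

If files are missing, the download from Hugging Face never completed, which matches the `>> Internet connectivity is False` line in the log.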
I managed to reproduce the problem, working on a fix.
I implemented a fix, can you try again from master?
I have tried again. The fix is good.
But I've got another question. The engine only works on the CPU instead of the GPU. Is there an upstream package compatibility issue causing this?
webui-docker-invoke-1 | ** SKIPPING DIFFUSION WEIGHTS DOWNLOAD PER USER REQUEST **
webui-docker-invoke-1 |
webui-docker-invoke-1 | ** INVOKEAI INSTALLATION SUCCESSFUL **
webui-docker-invoke-1 | If you installed manually from source or with 'pip install': activate the virtual environment
webui-docker-invoke-1 | then run one of the following commands to start InvokeAI.
webui-docker-invoke-1 |
webui-docker-invoke-1 | Web UI:
webui-docker-invoke-1 | invokeai --web # (connect to http://localhost:9090)
webui-docker-invoke-1 | invokeai --web --host 0.0.0.0 # (connect to http://your-lan-ip:9090 from another computer on the local network)
webui-docker-invoke-1 |
webui-docker-invoke-1 | Command-line interface:
webui-docker-invoke-1 | invokeai
webui-docker-invoke-1 |
webui-docker-invoke-1 | If you installed using an installation script, run:
webui-docker-invoke-1 | /stable-diffusion/invoke.sh
webui-docker-invoke-1 |
webui-docker-invoke-1 | Add the '--help' argument to see all of the command-line switches available for use.
webui-docker-invoke-1 |
webui-docker-invoke-1 | + invokeai --web --host 0.0.0.0 --port 7860 --config /docker/models.yaml --root_dir /stable-diffusion --outdir /output/invoke
webui-docker-invoke-1 | * Initializing, be patient...
webui-docker-invoke-1 | >> Initialization file /stable-diffusion/invokeai.init found. Loading...
webui-docker-invoke-1 | >> Internet connectivity is False
webui-docker-invoke-1 | >> InvokeAI, version 2.3.1
webui-docker-invoke-1 | >> InvokeAI runtime directory is "/stable-diffusion"
webui-docker-invoke-1 | >> GFPGAN Initialized
webui-docker-invoke-1 | >> CodeFormer Initialized
webui-docker-invoke-1 | >> ESRGAN Initialized
webui-docker-invoke-1 | >> Using device_type cpu
webui-docker-invoke-1 | >> xformers memory-efficient attention is available but disabled
webui-docker-invoke-1 | >> NSFW checker is disabled
webui-docker-invoke-1 | >> Current VRAM usage: 0.00G
webui-docker-invoke-1 | >> Loading stable-diffusion-1.5 from /data/StableDiffusion/v1-5-pruned-emaonly.ckpt
webui-docker-invoke-1 | >> Scanning Model: stable-diffusion-1.5
webui-docker-invoke-1 | >> Model scanned ok
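The `>> Using device_type cpu` line suggests PyTorch inside the container can't see the GPU. A minimal visibility check (a sketch; run it inside the invoke container, e.g. via `docker compose exec` or `docker compose run`) would be:

```python
# Minimal GPU-visibility check. Uses torch only if it is installed,
# so the same script can run outside the container too.
import importlib.util

if importlib.util.find_spec("torch") is None:
    print("torch not installed in this environment")
else:
    import torch
    print("cuda available:", torch.cuda.is_available())
    if torch.cuda.is_available():
        print("device:", torch.cuda.get_device_name(0))
    else:
        # None here means a CPU-only torch build; a version string means
        # the build is fine and the problem is the container's GPU access.
        print("torch cuda build:", torch.version.cuda)
```

Since the build log above shows `torch==1.13.1+cu117` being installed, a `False` here usually points at the host side rather than the wheel: a missing NVIDIA Container Toolkit, or a compose file that doesn't reserve the GPU for the service.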
We can continue in #348