stable-diffusion-webui
[Bug]: Error with runtime: json.decoder.JSONDecodeError: Unterminated string
Is there an existing issue for this?
- [X] I have searched the existing issues and checked the recent builds/commits
What happened?
I followed the guide, but after running the webui-user.bat file I get the errors below, right after the line `DiffusionWrapper has 859.52 M params.`
Already up to date.
venv "C:\Users\HOME\stable-diffusion-webui\venv\Scripts\Python.exe"
Python 3.10.8 (tags/v3.10.8:aaaf517, Oct 11 2022, 16:50:30) [MSC v.1933 64 bit (AMD64)]
Commit hash: 685f9631b56ff8bd43bce24ff5ce0f9a0e9af490
Installing requirements for Web UI
Launching Web UI with arguments: --lowvram --precision full --no-half
Warning: caught exception 'Found no NVIDIA driver on your system. Please check that you have an NVIDIA GPU and installed a driver from http://www.nvidia.com/Download/index.aspx', memory monitor disabled
No module 'xformers'. Proceeding without it.
LatentDiffusion: Running in eps-prediction mode
DiffusionWrapper has 859.52 M params.
Traceback (most recent call last):
File "C:\Users\HOME\stable-diffusion-webui\launch.py", line 295, in
Steps to reproduce the problem
- Go to stable-diffusion-webui
- Launch webui-user.bat
- Wait for it to finish.
What should have happened?
The web UI should have finished loading and printed the local URL for accessing Stable Diffusion.
Commit where the problem happens
685f9631b56ff8bd43bce24ff5ce0f9a0e9af490
What platforms do you use to access the UI?
Windows
What browsers do you use to access the UI?
Google Chrome
Command Line Arguments
@echo off
set PYTHON=
set GIT=
set VENV_DIR=
set COMMANDLINE_ARGS= --lowvram --precision full --no-half --skip-torch-cuda-test
git pull
call webui.bat
Additional information, context and logs
What guide are you using? The one on the main README page? What GPU do you have? I think --skip-torch-cuda-test is causing your issue; try disabling it.
@ClashSAN
- I was following this tutorial: https://www.youtube.com/watch?v=DHaL56P6f5M&ab_channel=SebastianKamph
- My GPU is an Intel(R) HD Graphics 520
- And this is what happens when I disable --skip-torch-cuda-test:
Already up to date.
venv "C:\Users\HOME\stable-diffusion-webui\venv\Scripts\Python.exe"
Python 3.10.8 (tags/v3.10.8:aaaf517, Oct 11 2022, 16:50:30) [MSC v.1933 64 bit (AMD64)]
Commit hash: 685f9631b56ff8bd43bce24ff5ce0f9a0e9af490
Traceback (most recent call last):
File "C:\Users\HOME\stable-diffusion-webui\launch.py", line 294, in
This webui and other prominent webuis target NVIDIA GPUs with at least 4 GB of VRAM. Your integrated Intel GPU will not be utilized, and running in CPU mode will be extremely slow. I recommend another option: Onnx diffusers UI running with --cpu-only.
But if you are attempting this anyway, run with --use-cpu all --skip-torch-cuda-test --precision full --no-half
I also encountered the same problem, and solved it like this:
This kind of error is usually caused by a corrupt data file that makes JSON parsing fail, so the first step is to find that file.
From the traceback, the JSONDecodeError occurs while loading the vocabulary file in the __init__ method of tokenization_clip.py. Locating that piece of code, the failure happens here:
with open(vocab_file, encoding="utf-8") as vocab_handle:
self.encoder = json.load(vocab_handle)
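To confirm the file is the culprit before digging through the cache, a small standalone check (a hypothetical helper, not part of the webui) can try to parse it the same way and report where parsing fails:

```python
import json

def check_vocab(path):
    """Try to parse a vocab.json the way the tokenizer does;
    report the token count on success, or the character offset
    where parsing fails on a corrupt file."""
    with open(path, encoding="utf-8") as fh:
        try:
            vocab = json.load(fh)
        except json.JSONDecodeError as e:
            return f"corrupt: {e.msg} at char {e.pos}"
    return f"ok: {len(vocab)} tokens"
```

Pointing this at the cached vocab.json should reproduce the same "Unterminated string" message and offset that appear in the traceback.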
At first I couldn't find this vocab.json file anywhere in the project. So I printed vocab_file to get the path of the JSON file, and discovered it is not in the project at all, but on the C drive.
my vocab.json path: C:\Users\xxx\.cache\huggingface\hub\models--openai--clip-vit-large-patch14\snapshots\8d052a0f05efbaefbc9e8786ba291cfdf93e5bff\vocab.json
The file I found on the C drive is a link; I followed it to its source file and opened that.
file link to: C:\Users\xxx\.cache\huggingface\hub\models--openai--clip-vit-large-patch14\blobs\4297ea6a8d2bae1fea8f48b45e257814dcb11f69
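The indirection above comes from the Hugging Face cache layout, where snapshot entries link into a blobs directory. A minimal sketch (assuming a Unix-style cache where the entries are symlinks; on Windows they may be plain copies or shortcuts) to locate every cached vocab.json and resolve it to its backing blob:

```python
from pathlib import Path

def find_vocab_blobs(cache_root):
    """Yield (cache_entry, resolved_blob_path) for every vocab.json
    under a Hugging Face hub cache directory."""
    for link in Path(cache_root).glob("models--*/snapshots/*/vocab.json"):
        yield link, link.resolve()  # resolve() follows the symlink to the blob
```

On the poster's machine, cache_root would be C:\Users\xxx\.cache\huggingface\hub.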
The end of the file was missing; it was only 267,716 characters long:
...rium</w<":15063, "quis": 15064, "re
Yes, my error was
...json.decoder.JSONDecodeError: Unterminated string starting at: line 1 column 267713 (char 267712).
I found the download address of the vocab.json file near the top of the tokenization_clip.py file: vocab.json
https://huggingface.co/openai/clip-vit-base-patch32/resolve/main/vocab.json
I downloaded the file manually and found that the complete file is 852,694 characters long.
- So clearly, the vocab.json file downloaded by the project was corrupted.
- I manually copied the contents of the complete file over the corrupt one, then restarted the project and it started successfully.
- Copy/Move vocab.json -> C:...\4297ea6a8d2bae1fea8f48b45e257814dcb11f69 <=> C:...\vocab.json
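The copy step above can be sketched as a small repair helper (hypothetical; the paths and download URL are the ones quoted in this thread, and the download itself can be done in a browser beforehand):

```python
import json
import shutil
from pathlib import Path

def repair_blob(good_vocab, cached_vocab):
    """Overwrite the corrupt cache blob with a known-good vocab.json,
    then re-parse it to verify the repair. Returns the token count."""
    blob = Path(cached_vocab).resolve()  # follow the cache link to the real blob
    shutil.copyfile(good_vocab, blob)
    with open(blob, encoding="utf-8") as fh:
        return len(json.load(fh))
```

Pointing good_vocab at the manually downloaded file and cached_vocab at the vocab.json in the snapshots directory (or directly at the blob it links to) performs the same Copy/Move fix.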