
[Bug]: Error with runtime: json.decoder.JSONDecodeError: Unterminated string

Open CosmicMagical opened this issue 2 years ago • 4 comments

Is there an existing issue for this?

  • [X] I have searched the existing issues and checked the recent builds/commits

What happened?

I followed every step in the guide and got this error after running the webui-user.bat file. Right after `DiffusionWrapper has 859.52 M params.` is printed, I get the errors below.

    Already up to date.
    venv "C:\Users\HOME\stable-diffusion-webui\venv\Scripts\Python.exe"
    Python 3.10.8 (tags/v3.10.8:aaaf517, Oct 11 2022, 16:50:30) [MSC v.1933 64 bit (AMD64)]
    Commit hash: 685f9631b56ff8bd43bce24ff5ce0f9a0e9af490
    Installing requirements for Web UI
    Launching Web UI with arguments: --lowvram --precision full --no-half
    Warning: caught exception 'Found no NVIDIA driver on your system. Please check that you have an NVIDIA GPU and installed a driver from http://www.nvidia.com/Download/index.aspx', memory monitor disabled
    No module 'xformers'. Proceeding without it.
    LatentDiffusion: Running in eps-prediction mode
    DiffusionWrapper has 859.52 M params.
    Traceback (most recent call last):
      File "C:\Users\HOME\stable-diffusion-webui\launch.py", line 295, in <module>
        start()
      File "C:\Users\HOME\stable-diffusion-webui\launch.py", line 290, in start
        webui.webui()
      File "C:\Users\HOME\stable-diffusion-webui\webui.py", line 132, in webui
        initialize()
      File "C:\Users\HOME\stable-diffusion-webui\webui.py", line 62, in initialize
        modules.sd_models.load_model()
      File "C:\Users\HOME\stable-diffusion-webui\modules\sd_models.py", line 308, in load_model
        sd_model = instantiate_from_config(sd_config.model)
      File "C:\Users\HOME\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\util.py", line 79, in instantiate_from_config
        return get_obj_from_str(config["target"])(**config.get("params", dict()))
      File "C:\Users\HOME\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 563, in __init__
        self.instantiate_cond_stage(cond_stage_config)
      File "C:\Users\HOME\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 630, in instantiate_cond_stage
        model = instantiate_from_config(config)
      File "C:\Users\HOME\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\util.py", line 79, in instantiate_from_config
        return get_obj_from_str(config["target"])(**config.get("params", dict()))
      File "C:\Users\HOME\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\encoders\modules.py", line 99, in __init__
        self.tokenizer = CLIPTokenizer.from_pretrained(version)
      File "C:\Users\HOME\stable-diffusion-webui\venv\lib\site-packages\transformers\tokenization_utils_base.py", line 1784, in from_pretrained
        return cls._from_pretrained(
      File "C:\Users\HOME\stable-diffusion-webui\venv\lib\site-packages\transformers\tokenization_utils_base.py", line 1929, in _from_pretrained
        tokenizer = cls(*init_inputs, **init_kwargs)
      File "C:\Users\HOME\stable-diffusion-webui\venv\lib\site-packages\transformers\models\clip\tokenization_clip.py", line 163, in __init__
        self.encoder = json.load(vocab_handle)
      File "C:\Users\HOME\AppData\Local\Programs\Python\Python310\lib\json\__init__.py", line 293, in load
        return loads(fp.read(),
      File "C:\Users\HOME\AppData\Local\Programs\Python\Python310\lib\json\__init__.py", line 346, in loads
        return _default_decoder.decode(s)
      File "C:\Users\HOME\AppData\Local\Programs\Python\Python310\lib\json\decoder.py", line 337, in decode
        obj, end = self.raw_decode(s, idx=_w(s, 0).end())
      File "C:\Users\HOME\AppData\Local\Programs\Python\Python310\lib\json\decoder.py", line 353, in raw_decode
        obj, end = self.scan_once(s, idx)
    json.decoder.JSONDecodeError: Unterminated string starting at: line 1 column 749998 (char 749997)

Steps to reproduce the problem

  1. Go to stable-diffusion-webui
  2. Launch webui-user.bat
  3. Wait for it to finish.

What should have happened?

The launcher should have finished and printed the local URL so I could open the Stable Diffusion web UI.

Commit where the problem happens

685f9631b56ff8bd43bce24ff5ce0f9a0e9af490

What platforms do you use to access UI ?

Windows

What browsers do you use to access the UI ?

Google Chrome

Command Line Arguments

@echo off

set PYTHON=
set GIT=
set VENV_DIR=
set COMMANDLINE_ARGS= --lowvram --precision full --no-half --skip-torch-cuda-test
git pull
call webui.bat

Additional information, context and logs

(Same log as in the "What happened?" section above.)

CosmicMagical avatar Dec 12 '22 23:12 CosmicMagical

What guide are you using? The one on the main readme page? What GPU do you have? I think --skip-torch-cuda-test is causing your issue; try removing it.

ClashSAN avatar Dec 12 '22 23:12 ClashSAN

@ClashSAN

What guide are you using? The one on the main readme page? What GPU do you have? I think --skip-torch-cuda-test is causing your issue; try removing it.

  • I was using this tutorial: https://www.youtube.com/watch?v=DHaL56P6f5M&ab_channel=SebastianKamph
  • My GPU is Intel(R) HD Graphics 520.
  • And this is what happens when I remove --skip-torch-cuda-test:

    Already up to date.
    venv "C:\Users\HOME\stable-diffusion-webui\venv\Scripts\Python.exe"
    Python 3.10.8 (tags/v3.10.8:aaaf517, Oct 11 2022, 16:50:30) [MSC v.1933 64 bit (AMD64)]
    Commit hash: 685f9631b56ff8bd43bce24ff5ce0f9a0e9af490
    Traceback (most recent call last):
      File "C:\Users\HOME\stable-diffusion-webui\launch.py", line 294, in <module>
        prepare_environment()
      File "C:\Users\HOME\stable-diffusion-webui\launch.py", line 209, in prepare_environment
        run_python("import torch; assert torch.cuda.is_available(), 'Torch is not able to use GPU; add --skip-torch-cuda-test to COMMANDLINE_ARGS variable to disable this check'")
      File "C:\Users\HOME\stable-diffusion-webui\launch.py", line 73, in run_python
        return run(f'"{python}" -c "{code}"', desc, errdesc)
      File "C:\Users\HOME\stable-diffusion-webui\launch.py", line 49, in run
        raise RuntimeError(message)
    RuntimeError: Error running command.
    Command: "C:\Users\HOME\stable-diffusion-webui\venv\Scripts\python.exe" -c "import torch; assert torch.cuda.is_available(), 'Torch is not able to use GPU; add --skip-torch-cuda-test to COMMANDLINE_ARGS variable to disable this check'"
    Error code: 1
    stdout:
    stderr: Traceback (most recent call last):
      File "<string>", line 1, in <module>
    AssertionError: Torch is not able to use GPU; add --skip-torch-cuda-test to COMMANDLINE_ARGS variable to disable this check
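The check that fails inside the launcher can be reproduced on its own (a minimal sketch; the helper name `cuda_status` is mine, and it degrades gracefully when torch is not installed):

```python
def cuda_status():
    """Mirror the launcher's GPU check: report whether this Python
    environment can import torch and see a CUDA-capable device."""
    try:
        import torch  # heavyweight import, so it stays inside the function
    except ImportError:
        return "torch not installed"
    return "cuda available" if torch.cuda.is_available() else "no cuda device"

if __name__ == "__main__":
    print(cuda_status())
```

Running this inside the webui venv tells you whether the assertion in launch.py would pass without going through the whole startup sequence.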

CosmicMagical avatar Dec 13 '22 00:12 CosmicMagical

This web UI and the other prominent web UIs are built for NVIDIA GPUs with at least 4 GB of VRAM. Your integrated Intel GPU will not be used, and running in CPU mode will be extremely slow. I recommend another option: the Onnx diffusers UI running with --cpu-only.

But if you are attempting this anyway, run with --use-cpu all --skip-torch-cuda-test --precision full --no-half

ClashSAN avatar Dec 13 '22 00:12 ClashSAN

I ran into the same problem and solved it like this:

This kind of error basically means a corrupted data file, which makes the JSON parsing fail, so the first step is to find that file.

As the traceback shows, the JSONDecodeError occurs while loading the vocabulary file in the __init__ method of tokenization_clip.py. Finding that piece of code confirms the failure happens here:

        with open(vocab_file, encoding="utf-8") as vocab_handle:
            self.encoder = json.load(vocab_handle)

At first I couldn't find this vocab.json file anywhere in the project. So I printed vocab_file to get the path of the JSON file, and it turned out the file is not in the project at all but in the Hugging Face cache on the C drive.

my vocab.json path: C:\Users\xxx\.cache\huggingface\hub\models--openai--clip-vit-large-patch14\snapshots\8d052a0f05efbaefbc9e8786ba291cfdf93e5bff\vocab.json
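To confirm which cache file is broken, you can try parsing it directly and print where decoding fails (a quick sketch; `check_vocab` is my own helper, and the path should point at your cached copy):

```python
import json
import os

def check_vocab(path):
    """Try to parse a tokenizer vocab file; report success or the
    character offset where JSON decoding fails (i.e. where the file
    was truncated)."""
    try:
        with open(path, encoding="utf-8") as vocab_handle:
            vocab = json.load(vocab_handle)
        return f"OK: {len(vocab)} entries"
    except json.JSONDecodeError as err:
        return f"Corrupted at char {err.pos}: {err.msg}"

if __name__ == "__main__":
    # Point this at the cached file, e.g. the snapshots\...\vocab.json
    # path shown above.
    path = "vocab.json"
    if os.path.exists(path):
        print(check_vocab(path))
```

A corrupted file produces the same "Unterminated string" message and offset as the webui traceback, which pins the blame on the file rather than the code.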

I found this file on the C drive; it is a link, so I followed it to its source file and opened it.

file link to: C:\Users\xxx\.cache\huggingface\hub\models--openai--clip-vit-large-patch14\blobs\4297ea6a8d2bae1fea8f48b45e257814dcb11f69

It turned out the end of the file was missing; it was only 267716 characters long:

    ...rium</w>": 15063, "quis": 15064, "re

Indeed, that matches my error:

...json.decoder.JSONDecodeError: Unterminated string starting at: line 1 column 267713 (char 267712).

The download address of the vocab.json file is listed near the top of tokenization_clip.py:

https://huggingface.co/openai/clip-vit-base-patch32/resolve/main/vocab.json

I downloaded the file manually and found it is 852694 characters long.

  • So clearly, the vocab.json file the project downloaded was corrupted (truncated).
  • I copied the complete contents over the corrupted file, restarted the project, and it launched successfully.
  • That is: copy/move the downloaded vocab.json over the blob C:...\4297ea6a8d2bae1fea8f48b45e257814dcb11f69, which is what C:...\vocab.json links to.
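An alternative to patching the blob by hand is to delete the model's cache folder so that the files are re-downloaded, intact, on the next launch (a sketch assuming the default Hugging Face cache location; `purge_clip_cache` is my own helper):

```python
import os
import shutil

def purge_clip_cache(cache_root=None):
    """Remove the cached openai/clip-vit-large-patch14 files so they are
    fetched again the next time the web UI starts. Returns True if a
    cache entry was found and deleted."""
    if cache_root is None:
        # Default hub cache location: ~/.cache/huggingface/hub
        cache_root = os.path.join(os.path.expanduser("~"),
                                  ".cache", "huggingface", "hub")
    target = os.path.join(cache_root, "models--openai--clip-vit-large-patch14")
    if os.path.isdir(target):
        shutil.rmtree(target)
        return True
    return False
```

This trades a re-download of the whole model entry for not having to hunt down the right blob hash by hand.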

Hades-Su avatar Feb 20 '23 17:02 Hades-Su