stable-diffusion-webui
[Bug]: JSONDecodeError
Is there an existing issue for this?
- [x] I have searched the existing issues and checked the recent builds/commits
What happened?
When running webui-user.bat on my Windows laptop, I keep getting a JSONDecodeError.
I have the webui running just fine on my Windows desktop; both machines use the exact same version of Python (3.10.6).
Steps to reproduce the problem
Running a first-time setup of the webui-user.bat file, or running it again after the venv / repositories setup.
What should have happened?
The web UI should load normally.
Commit where the problem happens
17a2076f72562b428052ee3fc8c43d19c03ecd1e
What platforms do you use to access the UI ?
Windows
What browsers do you use to access the UI ?
No response
Command Line Arguments
No response
Additional information, context and logs
venv "C:\stable-diffusion-webui\venv\Scripts\Python.exe"
Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug 1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Commit hash: 17a2076f72562b428052ee3fc8c43d19c03ecd1e
Cloning Stable Diffusion into repositories\stable-diffusion...
Cloning Taming Transformers into repositories\taming-transformers...
Cloning K-diffusion into repositories\k-diffusion...
Cloning CodeFormer into repositories\CodeFormer...
Cloning BLIP into repositories\BLIP...
Installing requirements for Web UI
Launching Web UI with arguments:
LatentDiffusion: Running in eps-prediction mode
DiffusionWrapper has 859.52 M params.
making attention of type 'vanilla' with 512 in_channels
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
making attention of type 'vanilla' with 512 in_channels
Traceback (most recent call last):
File "C:\stable-diffusion-webui\launch.py", line 228, in
I also encountered the same problem, and I solved it like this:
This kind of problem is basically caused by a bad data file, which leads to the JSON parsing error, so the first step is to find that file.
As can be seen from the error report, the JSONDecodeError occurs when the vocabulary file is loaded in the __init__ method of tokenization_clip.py. Find that piece of code, and you end up here:
with open(vocab_file, encoding="utf-8") as vocab_handle:
    self.encoder = json.load(vocab_handle)
At first I couldn't find this vocab.json file anywhere in the project. So I printed vocab_file to get the path of the JSON file, and it turned out the file is not in the project at all, but on the C drive.
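In case it helps, this is a minimal sketch of that step: the same two lines as above, with a temporary print added (remove it again once you have the path):

with open(vocab_file, encoding="utf-8") as vocab_handle:
    print(vocab_file)  # temporary: shows which file is actually being parsed
    self.encoder = json.load(vocab_handle)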
my vocab.json path: C:\Users\xxx\.cache\huggingface\hub\models--openai--clip-vit-large-patch14\snapshots\8d052a0f05efbaefbc9e8786ba291cfdf93e5bff\vocab.json
I found this file on the C drive; it is a link, so I followed it to its source file and opened that:
file link to: C:\Users\xxx\.cache\huggingface\hub\models--openai--clip-vit-large-patch14\blobs\4297ea6a8d2bae1fea8f48b45e257814dcb11f69
The end of the file was missing; it was only 267,716 characters long:
...rium</w<":15063, "quis": 15064, "re
Yes, my error was
...json.decoder.JSONDecodeError: Unterminated string starting at: line 1 column 267713 (char 267712).
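If you want to confirm that the cached file really is truncated before touching anything, a quick standalone check like the sketch below works. The path is the example one from above; substitute your own user name and snapshot hash.

import json
from pathlib import Path

# Example path from above -- adjust the user name and snapshot hash to your machine.
vocab_path = Path(r"C:\Users\xxx\.cache\huggingface\hub\models--openai--clip-vit-large-patch14"
                  r"\snapshots\8d052a0f05efbaefbc9e8786ba291cfdf93e5bff\vocab.json")

print("size on disk:", vocab_path.stat().st_size, "bytes")
try:
    vocab = json.loads(vocab_path.read_text(encoding="utf-8"))
    print("vocab.json parsed OK,", len(vocab), "entries")
except json.JSONDecodeError as err:
    print("vocab.json is truncated/corrupted:", err)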
I found the download address of the vocab.json file near the top of tokenization_clip.py:
https://huggingface.co/openai/clip-vit-base-patch32/resolve/main/vocab.json
I downloaded the file manually and found that it is 852,694 characters long.
- So obviously, the vocab.json file downloaded by the project was corrupted.
- I copied the contents of the freshly downloaded file over the broken one, restarted the project, and it started successfully.
- Copy/Move the downloaded vocab.json over C:...\4297ea6a8d2bae1fea8f48b45e257814dcb11f69 (the blob that C:...\vocab.json links to).
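For anyone who prefers to script that last step, here is a minimal sketch under the same assumptions as above: the URL is the one quoted from tokenization_clip.py, and the blob path (user name and hash) must be replaced with the one you found in your own cache.

import urllib.request
from pathlib import Path

# URL quoted above from tokenization_clip.py.
VOCAB_URL = "https://huggingface.co/openai/clip-vit-base-patch32/resolve/main/vocab.json"

# The blob that the snapshot's vocab.json links to -- use the hash from your own cache.
blob_path = Path(r"C:\Users\xxx\.cache\huggingface\hub\models--openai--clip-vit-large-patch14"
                 r"\blobs\4297ea6a8d2bae1fea8f48b45e257814dcb11f69")

# Download a fresh copy next to the broken blob, then swap it into place.
tmp_path = blob_path.with_name(blob_path.name + ".new")
urllib.request.urlretrieve(VOCAB_URL, tmp_path)
tmp_path.replace(blob_path)
print("replaced", blob_path)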
There's another way to help debug: put a print() where the error happens, e.g. in .\modules\shared.py:
def load(self, filename):
    with open(filename, "r", encoding="utf8") as file:
        print(filename)
        self.data = json.load(file)
Run webui.bat again to see the name of the JSON file being loaded, delete that file, and rerun webui.bat. Then you will be able to access the UI.
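If you would rather not edit shared.py, a throwaway script along these lines does a similar job: it checks the top-level JSON files in the webui folder (config.json, ui-config.json, ...) and reports any that fail to parse, which is usually the one to delete. The install path here is an assumption; point it at your own folder.

import json
from pathlib import Path

# Assumed install location -- point this at your own stable-diffusion-webui folder.
webui_dir = Path(r"C:\stable-diffusion-webui")

for path in sorted(webui_dir.glob("*.json")):
    try:
        json.loads(path.read_text(encoding="utf-8"))
        print("OK     ", path.name)
    except json.JSONDecodeError as err:
        print("BROKEN ", path.name, "-", err)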
Every time I try to generate, I get the JSON error:
################################################################
Install script for stable-diffusion + Web UI
Tested on Debian 11 (Bullseye)
################################################################
################################################################
Running on iwoolf user
################################################################
################################################################
Repo already cloned, using it as install directory
################################################################
################################################################
Create and activate python venv
################################################################
################################################################
Accelerating launch.py...
################################################################
Using TCMalloc: libtcmalloc_minimal.so.4
Python 3.9.12 (main, Apr 5 2022, 06:56:58) [GCC 7.5.0]
Version: v1.5.1
Commit hash: 68f336bd994bed5442ad95bad6b6ad5564a5409a
Installing requirements
Installing sd-webui-xl-demo requirements_webui.txt
Installing requirements for scikit_learn
If submitting an issue on github, please provide the full startup log for debugging purposes.
Initializing Dreambooth
Dreambooth revision: c2a5617c587b812b5a408143ddfb18fc49234edf
Successfully installed accelerate-0.19.0 diffusers-0.16.1 fastapi-0.94.1 gitpython-3.1.32 transformers-4.30.2
[+] xformers version 0.0.20 installed.
[+] torch version 2.0.1+cu118 installed.
[+] torchvision version 0.15.2+cu118 installed.
[+] accelerate version 0.19.0 installed.
[+] diffusers version 0.16.1 installed.
[+] transformers version 4.30.2 installed.
[+] bitsandbytes version 0.35.4 installed.
Launching Web UI with arguments: --ckpt-dir /media/iwoolf/tenT/SDModels --xformers --medvram --share
[-] ADetailer initialized. version: 23.7.8, num models: 9
[AddNet] Updating model hashes...
0it [00:00, ?it/s]
[AddNet] Updating model hashes...
0it [00:00, ?it/s]
2023-08-06 00:15:09,496 - ControlNet - INFO - ControlNet v1.1.233
ControlNet preprocessor location: /media/iwoolf/BigDrive/stable-diffusion-webui/extensions/sd-webui-controlnet/annotator/downloads
2023-08-06 00:15:09,976 - ControlNet - INFO - ControlNet v1.1.233
sd-webui-prompt-all-in-one background API service started successfully.
*** Error loading script: sd_webui_xldemo_txt2img.py
Traceback (most recent call last):
File "/media/iwoolf/BigDrive/stable-diffusion-webui/modules/scripts.py", line 319, in load_scripts
script_module = script_loading.load_module(scriptfile.path)
File "/media/iwoolf/BigDrive/stable-diffusion-webui/modules/script_loading.py", line 10, in load_module
module_spec.loader.exec_module(module)
File "
Loading weights [e6415c4892] from /media/iwoolf/tenT/SDModels/Realistic_Vision_V2.0.safetensors
Creating model from config: /media/iwoolf/BigDrive/stable-diffusion-webui/configs/v1-inference.yaml
LatentDiffusion: Running in eps-prediction mode
DiffusionWrapper has 859.52 M params.
Model loaded in 35.0s (load weights from disk: 7.9s, load config: 0.3s, create model: 1.2s, apply weights to model: 20.5s, apply half(): 1.2s, load VAE: 0.1s, load textual inversion embeddings: 0.5s, calculate empty prompt: 3.3s).
Applying attention optimization: xformers... done.
[AddNet] Updating model hashes...
0it [00:00, ?it/s]
[AddNet] Updating model hashes...
0it [00:00, ?it/s]
Exception in thread Thread-12:
Traceback (most recent call last):
  File "/media/iwoolf/BigDrive/anaconda3/lib/python3.9/threading.py", line 973, in _bootstrap_inner
    self.run()
  File "/media/iwoolf/BigDrive/anaconda3/lib/python3.9/threading.py", line 910, in run
    self._target(*self._args, **self._kwargs)
  File "/media/iwoolf/BigDrive/stable-diffusion-webui/modules/devices.py", line 171, in first_time_calculation
    conv2d(x)
  File "/media/iwoolf/BigDrive/stable-diffusion-webui/venv/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/media/iwoolf/BigDrive/stable-diffusion-webui/extensions-builtin/Lora/networks.py", line 376, in network_Conv2d_forward
    return torch.nn.Conv2d_forward_before_network(self, input)
  File "/media/iwoolf/BigDrive/stable-diffusion-webui/extensions/a1111-sd-webui-lycoris/lycoris.py", line 753, in lyco_Conv2d_forward
    return torch.nn.Conv2d_forward_before_lyco(self, input)
  File "/media/iwoolf/BigDrive/stable-diffusion-webui/venv/lib/python3.9/site-packages/torch/nn/modules/conv.py", line 463, in forward
    return self._conv_forward(input, self.weight, self.bias)
  File "/media/iwoolf/BigDrive/stable-diffusion-webui/venv/lib/python3.9/site-packages/torch/nn/modules/conv.py", line 459, in _conv_forward
    return F.conv2d(input, weight, bias, self.stride,
RuntimeError: GET was unable to find an engine to execute this computation
Running on local URL: http://127.0.0.1:7860
Running on public URL: https://67b7118d802b38e3be.gradio.live
This share link expires in 72 hours. For free permanent hosting and GPU upgrades (NEW!), check out Spaces: https://huggingface.co/spaces
Startup time: 247.5s (launcher: 143.5s, import torch: 4.7s, import gradio: 1.8s, setup paths: 1.9s, other imports: 3.4s, setup codeformer: 1.0s, list SD models: 3.4s, load scripts: 61.7s, scripts before_ui_callback: 0.1s, create ui: 14.4s, gradio launch: 11.2s, app_started_callback: 0.2s).
*** Error parsing JSON generation info: task(75k79zadxr2rh76)
*** Error parsing JSON generation info: task(oxti19a0ktoya6p)
Traceback (most recent call last):
  File "/media/iwoolf/BigDrive/stable-diffusion-webui/venv/lib/python3.9/site-packages/gradio/routes.py", line 422, in run_predict
    output = await app.get_blocks().process_api(
  File "/media/iwoolf/BigDrive/stable-diffusion-webui/venv/lib/python3.9/site-packages/gradio/blocks.py", line 1321, in process_api
    inputs = self.preprocess_data(fn_index, inputs, state)
  File "/media/iwoolf/BigDrive/stable-diffusion-webui/venv/lib/python3.9/site-packages/gradio/blocks.py", line 1159, in preprocess_data
    self.validate_inputs(fn_index, inputs)
  File "/media/iwoolf/BigDrive/stable-diffusion-webui/venv/lib/python3.9/site-packages/gradio/blocks.py", line 1146, in validate_inputs
    raise ValueError(
ValueError: An event handler (update_generation_info) didn't receive enough input values (needed: 3, got: 2).
Check if the event handler calls a Javascript function, and make sure its return value is correct.
Wanted inputs: [textbox, html, html]
Received inputs: ["", -1]