[Bug]: Png Info -> Send to txt2img fails when picture data contains Token merging ratio
### Is there an existing issue for this?
- [x] I have searched the existing issues and checked the recent builds/commits
### What happened?
See title, that's it.
### Steps to reproduce the problem
- Generate a picture with Token merging ratio enabled
- Import the picture into PNG Info
- Click Send to txt2img
### What should have happened?
I'd have preferred it to work :)
### Commit where the problem happens
1.3.0 and earlier.
### What Python version are you running on?
Python 3.10.x
### What platforms do you use to access the UI?
Windows
### What device are you running WebUI on?
Nvidia GPUs (RTX 20 above)
### What browsers do you use to access the UI?
Mozilla Firefox
### Command Line Arguments
No
### List of extensions
No
### Console logs

```
venv "C:\sd\venv\Scripts\Python.exe"
Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug 1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Version: v1.2.1-162-ga6b618d0
Commit hash: a6b618d07248267de36f0e8f4a847d997285e272
Installing requirements
Launching Web UI with arguments:
No module 'xformers'. Proceeding without it.
Loading weights [6ce0161689] from C:\sd\models\Stable-diffusion\v1-5-pruned-emaonly.safetensors
Running on local URL: http://127.0.0.1:7860
To create a public link, set `share=True` in `launch()`.
Startup time: 7.3s (import torch: 2.0s, import gradio: 1.4s, import ldm: 0.5s, other imports: 1.2s, load scripts: 1.2s, create ui: 0.5s, gradio launch: 0.3s).
Creating model from config: C:\sd\configs\v1-inference.yaml
LatentDiffusion: Running in eps-prediction mode
DiffusionWrapper has 859.52 M params.
Applying optimization: sdp-no-mem... done.
Textual inversion embeddings loaded(0):
Model loaded in 5.3s (load weights from disk: 0.9s, create model: 0.6s, apply weights to model: 1.0s, apply half(): 0.8s, move model to device: 0.7s, load textual inversion embeddings: 1.3s).
Traceback (most recent call last):
  File "C:\sd\venv\lib\site-packages\gradio\routes.py", line 414, in run_predict
    output = await app.get_blocks().process_api(
  File "C:\sd\venv\lib\site-packages\gradio\blocks.py", line 1323, in process_api
    result = await self.call_function(
  File "C:\sd\venv\lib\site-packages\gradio\blocks.py", line 1051, in call_function
    prediction = await anyio.to_thread.run_sync(
  File "C:\sd\venv\lib\site-packages\anyio\to_thread.py", line 31, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
  File "C:\sd\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 937, in run_sync_in_worker_thread
    return await future
  File "C:\sd\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 867, in run
    result = context.run(func, *args)
  File "C:\sd\modules\generation_parameters_copypaste.py", line 379, in paste_func
    v = key(params)
  File "C:\sd\modules\generation_parameters_copypaste.py", line 414, in paste_settings
    v = shared.opts.cast_value(setting_name, v)
  File "C:\sd\modules\shared.py", line 688, in cast_value
    value = expected_type(value)
ValueError: invalid literal for int() with base 10: '0.6'
```
### Additional information
I thought someone would have reported this by now, so I didn't report it earlier lol
It might also have been Negative Guidance minimum sigma. I have both set to the same value, and am too lazy to check which of them causes this.
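For reference, the final frame of the traceback is just Python's `int()` rejecting a decimal string: `cast_value` derives `expected_type` from the setting's stored default, so if that default happens to be an int while the image's parameters carry `0.6`, the cast raises. A minimal standalone reproduction of that cast (the values here are illustrative, not taken from the actual settings definitions):

```python
# Simulates the failing line in cast_value (modules/shared.py):
#   value = expected_type(value)
default_value = 0        # assume a setting whose stored default is the int 0
value = "0.6"            # the string PNG Info parsed out of the image parameters

expected_type = type(default_value)   # -> <class 'int'>
try:
    value = expected_type(value)      # int("0.6")
except ValueError as err:
    print(err)                        # invalid literal for int() with base 10: '0.6'

# Going through float first, as the workaround further down does, avoids the crash
# (though the fractional part is truncated for an int-typed setting):
print(int(float("0.6")))              # -> 0
```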
Odd, I have token merging enabled and have been using it for weeks, yet I've not experienced the issue you're having. That said, I only have my main ratio set to 0.3, with everything else off or at its default.
I have the same issue. It used to happen and then fix itself, until it just stopped working completely.
This is what the webui is showing:

```
Traceback (most recent call last):
  File "C:\Users\maybe\webui\stable-diffusion-webui\venv\lib\site-packages\gradio\routes.py", line 414, in run_predict
    output = await app.get_blocks().process_api(
  File "C:\Users\maybe\webui\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 1323, in process_api
    result = await self.call_function(
  File "C:\Users\maybe\webui\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 1051, in call_function
    prediction = await anyio.to_thread.run_sync(
  File "C:\Users\maybe\webui\stable-diffusion-webui\venv\lib\site-packages\anyio\to_thread.py", line 31, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
  File "C:\Users\maybe\webui\stable-diffusion-webui\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 937, in run_sync_in_worker_thread
    return await future
  File "C:\Users\maybe\webui\stable-diffusion-webui\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 867, in run
    result = context.run(func, *args)
  File "C:\Users\maybe\webui\stable-diffusion-webui\modules\generation_parameters_copypaste.py", line 379, in paste_func
    v = key(params)
  File "C:\Users\maybe\webui\stable-diffusion-webui\modules\generation_parameters_copypaste.py", line 414, in paste_settings
    v = shared.opts.cast_value(setting_name, v)
  File "C:\Users\maybe\webui\stable-diffusion-webui\modules\shared.py", line 688, in cast_value
    value = expected_type(value)
ValueError: invalid literal for int() with base 10: '0.5'
```
I've fixed it with some code:
Open /modules/shared.py and replace this code at line 678:
```python
def cast_value(self, key, value):
    """casts an arbitrary to the same type as this setting's value with key
    Example: cast_value("eta_noise_seed_delta", "12") -> returns 12 (an int rather than str)
    """

    if value is None:
        return None

    default_value = self.data_labels[key].default
    if default_value is None:
        default_value = getattr(self, key, None)
    if default_value is None:
        return None

    expected_type = type(default_value)
    if expected_type == bool and value == "False":
        value = False
    else:
        value = expected_type(value)

    return value
```
with this:
```python
def cast_value(self, key, value):
    """casts an arbitrary to the same type as this setting's value with key
    Example: cast_value("eta_noise_seed_delta", "12") -> returns 12 (an int rather than str)
    """

    if value is None:
        return None

    default_value = self.data_labels[key].default
    if default_value is None:
        default_value = getattr(self, key, None)
    if default_value is None:
        return None

    expected_type = type(default_value)
    if expected_type == bool and value == "False":
        value = False
    else:
        try:
            value = float(value)
            if expected_type == int:
                value = int(value)
        except ValueError:
            # value could not be parsed as a number; keep the original string
            pass

    return value
```
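One caveat with the workaround above: it pushes every non-bool value through `float()`, so a string-typed setting whose pasted value happens to look like a number would end up stored as a float instead of a string. A narrower variant, sketched here and untested against the webui codebase, only falls back to the float path when the direct cast fails:

```python
expected_type = type(default_value)
if expected_type == bool and value == "False":
    value = False
else:
    try:
        # keep the normal behaviour whenever the direct cast works
        value = expected_type(value)
    except ValueError:
        # e.g. int("0.6") fails; round-trip through float, then narrow back.
        # A genuinely non-numeric value will still raise here, surfacing the
        # bad data instead of silently ignoring it.
        value = expected_type(float(value))
```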
Same issue any time I click on "Read generation parameters from prompt or last generation if prompt is empty into user interface.":
```
Traceback (most recent call last):█████████████████████████████████████████████████████| 25/25 [01:09<00:00, 2.21s/it]
  File "D:\AI_TOOL\A1111\stable-diffusion-webui\venv\lib\site-packages\gradio\routes.py", line 414, in run_predict
    output = await app.get_blocks().process_api(
  File "D:\AI_TOOL\A1111\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 1323, in process_api
    result = await self.call_function(
  File "D:\AI_TOOL\A1111\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 1051, in call_function
    prediction = await anyio.to_thread.run_sync(
  File "D:\AI_TOOL\A1111\stable-diffusion-webui\venv\lib\site-packages\anyio\to_thread.py", line 33, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
  File "D:\AI_TOOL\A1111\stable-diffusion-webui\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 877, in run_sync_in_worker_thread
    return await future
  File "D:\AI_TOOL\A1111\stable-diffusion-webui\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 807, in run
    result = context.run(func, *args)
  File "D:\AI_TOOL\A1111\stable-diffusion-webui\modules\generation_parameters_copypaste.py", line 379, in paste_func
    v = key(params)
  File "D:\AI_TOOL\A1111\stable-diffusion-webui\modules\generation_parameters_copypaste.py", line 414, in paste_settings
    v = shared.opts.cast_value(setting_name, v)
  File "D:\AI_TOOL\A1111\stable-diffusion-webui\modules\shared.py", line 688, in cast_value
    value = expected_type(value)
ValueError: invalid literal for int() with base 10: '0.6'
```
same problem