text-generation-webui
Server.py Won't Run: KeyError: 'serialized_input'
Describe the bug
Update: For me, upgrading gradio with pip install -U gradio fixed the problem.
I'm running on Colab, and when I run server.py, it throws an error.
Is there an existing issue for this?
- [X] I have searched the existing issues
Reproduction
Run server.py on Colab.
Screenshot
No response
Logs
INFO:Loaded the model in 130.66 seconds.
INFO:Loading the extension "silero_tts"...
Using Silero TTS cached checkpoint found at /root/.cache/torch/hub
INFO:Loading the extension "gallery"...
╭───────────────────── Traceback (most recent call last) ──────────────────────╮
│ /content/GPTQ-for-LLaMa/server.py:930 in <module> │
│ │
│ 927 │ │ }) │
│ 928 │ │
│ 929 │ # Launch the web UI │
│ ❱ 930 │ create_interface() │
│ 931 │ while True: │
│ 932 │ │ time.sleep(0.5) │
│ 933 │ │ if shared.need_restart: │
│ │
│ /content/GPTQ-for-LLaMa/server.py:517 in create_interface │
│ │
│ 514 │ if shared.args.extensions is not None and len(shared.args.extensio │
│ 515 │ │ extensions_module.load_extensions() │
│ 516 │ │
│ ❱ 517 │ with gr.Blocks(css=ui.css if not shared.is_chat() else ui.css + ui │
│ 518 │ │ │
│ 519 │ │ # Create chat mode interface │
│ 520 │ │ if shared.is_chat(): │
│ │
│ /usr/local/lib/python3.10/dist-packages/gradio/blocks.py:1285 in __exit__ │
│ │
│ 1282 │ │ │ Context.root_block = None │
│ 1283 │ │ else: │
│ 1284 │ │ │ self.parent.children.extend(self.children) │
│ ❱ 1285 │ │ self.config = self.get_config_file() │
│ 1286 │ │ self.app = routes.App.create_app(self) │
│ 1287 │ │ self.progress_tracking = any(block_fn.tracks_progress for blo │
│ 1288 │ │ self.exited = True │
│ │
│ /usr/local/lib/python3.10/dist-packages/gradio/blocks.py:1261 in │
│ get_config_file │
│ │
│ 1258 │ │ │ │ assert isinstance(block, serializing.Serializable) │
│ 1259 │ │ │ │ block_config["serializer"] = serializer │
│ 1260 │ │ │ │ block_config["info"] = { │
│ ❱ 1261 │ │ │ │ │ "input": list(block.input_api_info()), # type: i │
│ 1262 │ │ │ │ │ "output": list(block.output_api_info()), # type: │
│ 1263 │ │ │ │ } │
│ 1264 │ │ │ config["components"].append(block_config) │
│ │
│ /usr/local/lib/python3.10/dist-packages/gradio_client/serializing.py:40 in │
│ input_api_info │
│ │
│ 37 │ # For backwards compatibility │
│ 38 │ def input_api_info(self) -> tuple[str, str]: │
│ 39 │ │ api_info = self.api_info() │
│ ❱ 40 │ │ return (api_info["serialized_input"][0], api_info["serialized_ │
│ 41 │ │
│ 42 │ # For backwards compatibility │
│ 43 │ def output_api_info(self) -> tuple[str, str]: │
╰──────────────────────────────────────────────────────────────────────────────╯
KeyError: 'serialized_input'
System Info
```shell
Colab
```
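For context on the traceback: the backwards-compatibility shim in gradio_client/serializing.py (visible above) indexes api_info()["serialized_input"] directly, and the KeyError means the installed gradio_client returns a different api_info() schema than the pinned gradio expects. Here is a minimal guarded sketch of that shim, only to illustrate the failure; the added error handling is ours, not upstream's:

```python
# Guarded version of the backwards-compat shim shown in the traceback
# (gradio_client/serializing.py). The try/except and error message are our
# own addition to make the version mismatch obvious; the actual fix is
# simply installing matching gradio and gradio_client versions.
def input_api_info(self) -> tuple[str, str]:
    api_info = self.api_info()
    try:
        return (api_info["serialized_input"][0], api_info["serialized_input"][1])
    except KeyError:
        # Newer gradio_client releases changed the api_info() schema and no
        # longer return "serialized_input", which is what this thread hits.
        raise RuntimeError(
            "gradio/gradio_client version mismatch: api_info() has no "
            "'serialized_input' key; pin matching versions"
        ) from None
```

This is why the fixes reported below all amount to the same thing: aligning the gradio and gradio_client versions.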
Same here with a fresh install on a brand-new Debian 11 on-prem server.
Same here with a fresh install on Windows:
Select the model that you want to download:
A) OPT 6.7B
B) OPT 2.7B
C) OPT 1.3B
D) OPT 350M
E) GALACTICA 6.7B
F) GALACTICA 1.3B
G) GALACTICA 125M
H) Pythia-6.9B-deduped
I) Pythia-2.8B-deduped
J) Pythia-1.4B-deduped
K) Pythia-410M-deduped
L) Manually specify a Hugging Face model
M) Do not download a model
Input> l
Then type the name of your desired Hugging Face model in the format organization/name.
Examples:
facebook/opt-1.3b
EleutherAI/pythia-1.4b-deduped
Input> TheBloke/vicuna-13B-1.1-GPTQ-4bit-128g
Downloading the model to models\TheBloke_vicuna-13B-1.1-GPTQ-4bit-128g
100%|█████████████████████████████████████████████████████████████████████████████████████████| 7.84k /7.84k 7.82MiB/s
100%|██████████████████████████████████████████████████████████████████████████████████████████| 576 /576 575kiB/s
100%|██████████████████████████████████████████████████████████████████████████████████████████| 131 /131 131kiB/s
100%|█████████████████████████████████████████████████████████████████████████████████████████| 57.0 /57.0 56.3kiB/s
100%|██████████████████████████████████████████████████████████████████████████████████████████| 411 /411 411kiB/s
100%|█████████████████████████████████████████████████████████████████████████████████████████| 500k /500k 9.44MiB/s
100%|██████████████████████████████████████████████████████████████████████████████████████████| 699 /699 698kiB/s
100%|█████████████████████████████████████████████████████████████████████████████████████████| 7.26G /7.26G 63.8MiB/s
INFO:Gradio HTTP request redirected to localhost :)
bin C:\Users\tomlo\Downloads\oobabooga_windows\oobabooga_windows\installer_files\env\lib\site-packages\bitsandbytes\libbitsandbytes_cuda117_nocublaslt.dll
INFO:Loading TheBloke_vicuna-13B-1.1-GPTQ-4bit-128g...
INFO:Found the following quantized model: models\TheBloke_vicuna-13B-1.1-GPTQ-4bit-128g\vicuna-13B-1.1-GPTQ-4bit-128g.latest.safetensors
INFO:Loaded the model in 7.37 seconds.
INFO:Loading the extension "gallery"...
Traceback (most recent call last):
File "C:\Users\tomlo\Downloads\oobabooga_windows\oobabooga_windows\text-generation-webui\server.py", line 930, in <module>
create_interface()
File "C:\Users\tomlo\Downloads\oobabooga_windows\oobabooga_windows\text-generation-webui\server.py", line 517, in create_interface
with gr.Blocks(css=ui.css if not shared.is_chat() else ui.css + ui.chat_css, analytics_enabled=False, title=title, theme=ui.theme) as shared.gradio['interface']:
File "C:\Users\tomlo\Downloads\oobabooga_windows\oobabooga_windows\installer_files\env\lib\site-packages\gradio\blocks.py", line 1285, in __exit__
self.config = self.get_config_file()
File "C:\Users\tomlo\Downloads\oobabooga_windows\oobabooga_windows\installer_files\env\lib\site-packages\gradio\blocks.py", line 1261, in get_config_file
"input": list(block.input_api_info()), # type: ignore
File "C:\Users\tomlo\Downloads\oobabooga_windows\oobabooga_windows\installer_files\env\lib\site-packages\gradio_client\serializing.py", line 40, in input_api_info
return (api_info["serialized_input"][0], api_info["serialized_input"][1])
KeyError: 'serialized_input'
Same here, on a clean install on Windows
I swear it was working an hour ago :) Something in the dependencies I guess?
> I swear it was working an hour ago :) Something in the dependencies I guess?
Seems to be. I have a build from 3rd May 2023 and that works (although that was CPU, not GPU), so something may have changed on the GPU side.
upgrading gradio to v3.28.3 worked for me
> upgrading gradio to v3.28.3 worked for me
How did you do that?
pip install --force gradio==3.28.3
oobabooga_linux/installer_files/env/bin/python3.10 -m pip install "gradio==3.28.3"
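If a force-reinstall doesn't seem to take effect, it's usually because pip ran against a different interpreter than the one launching server.py. A quick check, run with the interpreter you think you're using (nothing here is specific to this repo):

```python
# Run this with the same interpreter that launches server.py
# (e.g. the one under installer_files/env) to confirm which
# environment and package versions are actually in use.
import sys
import gradio
import gradio_client

print("interpreter:  ", sys.executable)
print("gradio:       ", gradio.__version__)
print("gradio_client:", gradio_client.__version__)
```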
force upgrading gradio worked for me
Yeah, didn't work for me.
Traceback (most recent call last):
File "H:\Personal\hobbies\T2A\oobabooga_windows\text-generation-webui\server.py", line 930, in
Done! Press any key to continue . . .
Yeah also didn't work for me:
Traceback (most recent call last):
File "C:\CodeBlocks\oobabooga_windows\text-generation-webui\server.py", line 930, in <module>
create_interface()
File "C:\CodeBlocks\oobabooga_windows\text-generation-webui\server.py", line 517, in create_interface
with gr.Blocks(css=ui.css if not shared.is_chat() else ui.css + ui.chat_css, analytics_enabled=False, title=title, theme=ui.theme) as shared.gradio['interface']:
File "C:\CodeBlocks\oobabooga_windows\installer_files\env\lib\site-packages\gradio\blocks.py", line 1285, in __exit__
self.config = self.get_config_file()
File "C:\CodeBlocks\oobabooga_windows\installer_files\env\lib\site-packages\gradio\blocks.py", line 1261, in get_config_file
"input": list(block.input_api_info()), # type: ignore
File "C:\CodeBlocks\oobabooga_windows\installer_files\env\lib\site-packages\gradio_client\serializing.py", line 40, in input_api_info
return (api_info["serialized_input"][0], api_info["serialized_input"][1])
KeyError: 'serialized_input'
Bumping gradio to 3.28.3 and reinstalling from scratch works for me (I haven't tried to force-upgrade).
> upgrading gradio to v3.28.3 worked for me
Yes, works for me.
So I'm not sure what I'm doing wrong. I updated gradio and verified that it's on version 3.28.3. I deleted the entire directory, extracted the Windows installer again, and reran the start_windows.bat file. I didn't get a single error until I selected the model I wanted it to download; it finishes downloading, and then it continues to give me the error.
INFO:Gradio HTTP request redirected to localhost :)
bin H:\Personal\hobbies\T2A\oobabooga_windows\installer_files\env\lib\site-packages\bitsandbytes\libbitsandbytes_cuda117.dll
INFO:Loading facebook_opt-350m...
INFO:Loaded the model in 1.18 seconds.
INFO:Loading the extension "gallery"...
Traceback (most recent call last):
File "H:\Personal\hobbies\T2A\oobabooga_windows\text-generation-webui\server.py", line 930, in
Done! Press any key to continue . . .
Never mind. Not sure why this would have made a difference, but I opened the cmd_windows.bat file and then ran the following command to do the update:
pip install --force gradio==3.28.3
And then it actually ran this time.
> pip install --force gradio==3.28.3
Worked for me too.
I have a similar issue:
INFO:Loading gpt4-x-alpaca-13b-native-4bit-128g...
INFO:Found the following quantized model: models\gpt4-x-alpaca-13b-native-4bit-128g\gpt-x-alpaca-13b-native-4bit-128g-cuda.pt
INFO:Loaded the model in 36.85 seconds.
INFO:Loading the extension "gallery"...
Traceback (most recent call last):
File "R:\AI\one-click-installers-oobabooga-windows\text-generation-webui\server.py", line 885, in
I tried pip install --force gradio==3.28.3 like the other guys did, since the error was similar, but it still doesn't work.
Also tried force-updating Gradio, with no luck.
INFO:Gradio HTTP request redirected to localhost :)
bin D:\oobabooga_windows\installer_files\env\lib\site-packages\bitsandbytes\libbitsandbytes_cuda117.dll
INFO:Loading facebook_opt-350m...
INFO:Loaded the model in 1.20 seconds.
INFO:Loading the extension "gallery"...
Traceback (most recent call last):
File "D:\oobabooga_windows\text-generation-webui\server.py", line 885, in
Done! Press any key to continue . . .
Go into the base folder, then installer_files/env, and run ./python.exe -m pip install --force gradio==3.28.3. You have to use the right Python version, which in this case is the one in the env folder; it's likely not the same one in your path when just using a straight-up "pip install". This fixed the issue for me.
I was with you up to the point where you said go to the base folder. Sorry, I'm a noob at this stuff.
What do you mean exactly by "then text-generation-ui/env and then run ./python.exe -m pip install --force gradio==3.28.3"?
I went into the text-generation-webui folder, ran cmd in the address bar, and entered that command, but I'm obviously doing it wrong.
Sorry! Not text-generation-ui/env but installer_files/env, and if you're using cmd then it's just "python.exe -m pip install --force gradio==3.28.3", no ./
> Sorry! Not text-generation-ui/env but installer_files/env, and if you're using cmd then it's just "python.exe -m pip install --force gradio==3.28.3", no ./
Yeah, that was it. Thanks heaps, it's working now :D
Just need to change the line in requirements.txt that says gradio==3.25.0 to gradio==3.28.3.
> v3.28.3
Thanks for the suggestion, it works for me as well.
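For the requirements.txt route suggested above, a throwaway sketch that flips the pin before re-running pip or the updater; editing the file by hand works just as well, and the version strings are the ones from this thread:

```python
# One-off: replace the gradio pin in requirements.txt, then re-run
# pip install -r requirements.txt (or the platform's update script).
from pathlib import Path

req = Path("requirements.txt")
req.write_text(req.read_text().replace("gradio==3.25.0", "gradio==3.28.3"))
print(req.read_text())
```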
Same error, but on an Intel Mac. I changed the gradio version in requirements.txt and also ran pip install --force gradio==3.28.3, but it didn't help.
If I check my gradio version with pip show gradio it claims to be 3.28.3.
I'd want to try python.exe -m pip install --force gradio==3.28.3, but I'm not sure what the equivalent would be on a Mac (obviously not .exe).
And it was working yesterday and I have not changed anything since... very weird!
For anyone using the auto-installers: the pip gradio upgrade alone didn't work for me.
I also edited the ./requirements.txt file with the corrected gradio version and ran pip against the updated requirements.
I also had to run the ./update_linux.sh script. That seemed to finally fix the install.
After upgrading gradio, the webui now works, but the character selector is broken; I cannot load any characters from the gallery.
Don't upgrade gradio; try this instead:
pip install gradio_client==0.1.4 gradio==3.25.0
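If you take this route, here is a quick sanity check that the pinned pair actually landed; the "known good" combination is just the one this comment reports (gradio==3.25.0 with gradio_client==0.1.4), not an authoritative compatibility matrix:

```python
# Warn if the installed gradio/gradio_client pair differs from the one
# this comment reports as working. Other pairs may be fine too.
from importlib.metadata import version

pair = (version("gradio"), version("gradio_client"))
if pair != ("3.25.0", "0.1.4"):
    print(f"untested gradio/gradio_client pair: {pair}")
else:
    print("gradio/gradio_client pinned as suggested")
```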
> pip install --force gradio==3.28.3
Yes, it works for me. My laptop is a MacBook Pro M1.