lollms-webui
run after install - error
Expected Behavior
The app starts and runs normally after installation.
Current Behavior
A sqlite3.InterfaceError is raised on the first run after install.
Steps to Reproduce
```
Checking discussions database...
llama_model_load: loading model from './models/gpt4all-lora-quantized-ggml.bin' - please wait ...
llama_model_load: GPTQ model detected - are you sure n_parts should be 2? we normally expect it to be 1
llama_model_load: use '--n_parts 1' if necessary
llama_model_load: n_vocab = 32001
llama_model_load: n_ctx   = 512
llama_model_load: n_embd  = 5120
llama_model_load: n_mult  = 256
llama_model_load: n_head  = 40
llama_model_load: n_layer = 40
llama_model_load: n_rot   = 128
llama_model_load: f16     = 4
llama_model_load: n_ff    = 13824
llama_model_load: n_parts = 2
llama_model_load: type    = 2
llama_model_load: ggml map size = 9702.04 MB
llama_model_load: ggml ctx size = 101.25 KB
llama_model_load: mem required  = 11750.14 MB (+ 3216.00 MB per state)
llama_model_load: loading tensors from './models/gpt4all-lora-quantized-ggml.bin'
llama_model_load: model size = 9701.60 MB / num tensors = 363
llama_init_from_file: kv self size = 800.00 MB
Traceback (most recent call last):
  File "D:\gpt4all\gpt4all-ui\app.py", line 468, in <module>
    bot = Gpt4AllWebUI(app, args)
  File "D:\gpt4all\gpt4all-ui\app.py", line 98, in __init__
    self.prepare_a_new_chatbot()
  File "D:\gpt4all\gpt4all-ui\app.py", line 114, in prepare_a_new_chatbot
    self.condition_chatbot()
  File "D:\gpt4all\gpt4all-ui\app.py", line 130, in condition_chatbot
    if self.db.does_last_discussion_have_messages():
  File "D:\gpt4all\gpt4all-ui\db.py", line 162, in does_last_discussion_have_messages
    last_message = self.select("SELECT * FROM message WHERE discussion_id=?", (last_discussion_id,), fetch_all=False)
  File "D:\gpt4all\gpt4all-ui\db.py", line 86, in select
    cursor = conn.execute(query, params)
sqlite3.InterfaceError: Error binding parameter 0 - probably unsupported type.
```
Possible Solution
Fix the parameter binding in db.py: make sure `last_discussion_id` is a plain value sqlite3 can bind (an int, not a row tuple or other unsupported type) before it reaches `conn.execute`.
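This InterfaceError usually means the value passed as parameter 0 is not a type sqlite3 can bind; given the traceback, a likely candidate is `last_discussion_id` holding a whole row tuple rather than a plain integer. Below is a minimal sketch of the failure and a guarded version — the unpacking logic is hypothetical, the actual db.py may differ:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE message (id INTEGER PRIMARY KEY, discussion_id INTEGER)")

# fetchone() returns a row *tuple*; binding it directly reproduces the error.
row = conn.execute("SELECT 17").fetchone()  # row == (17,), not 17
try:
    conn.execute("SELECT * FROM message WHERE discussion_id=?", (row,))
except (sqlite3.InterfaceError, sqlite3.ProgrammingError) as exc:
    # InterfaceError on older Pythons; ProgrammingError on 3.12+.
    print("reproduced:", exc)

# Hypothetical fix: unpack the scalar id (and handle an empty table) first.
last_discussion_id = row[0] if row is not None else None
if last_discussion_id is not None:
    cursor = conn.execute(
        "SELECT * FROM message WHERE discussion_id=?", (last_discussion_id,)
    )
    print("bound fine, result:", cursor.fetchone())
```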
Context
none
Screenshots
none
Restarting fixes it, but the error still happens on the first run.
I think you need to convert the ggml model to the ggjt format.
```
(env) root@AI:/home/gpt4all-ui# python app.py --host 192.168.0.180 --model gpt4all-lora-quantized-ggjt.bin
Checking discussions database...
llama_model_load: loading model from './models/gpt4all-lora-quantized-ggjt.bin' - please wait ...
```
I think the current version of this repo has fixed it.
The problem is probably due to the migration to the new database format. This should be the last time we run into it: we have created a versioning system, so on future upgrades the system will upgrade the database automatically.
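The versioning approach described above can be sketched with SQLite's built-in `PRAGMA user_version`. This is the general pattern, not the repo's actual migration code, and the schema statements are made up for illustration:

```python
import sqlite3

# Each entry upgrades the schema by one version; list index + 1 is the
# version that statement brings the database to. (Illustrative schema only.)
MIGRATIONS = [
    "CREATE TABLE discussion (id INTEGER PRIMARY KEY, title TEXT)",
    "CREATE TABLE message (id INTEGER PRIMARY KEY, discussion_id INTEGER, content TEXT)",
]

def upgrade(conn: sqlite3.Connection) -> int:
    """Apply any pending migrations and return the resulting schema version."""
    (current,) = conn.execute("PRAGMA user_version").fetchone()
    for version in range(current + 1, len(MIGRATIONS) + 1):
        conn.execute(MIGRATIONS[version - 1])
        # PRAGMA does not accept ? placeholders; version is a trusted int here.
        conn.execute(f"PRAGMA user_version = {version}")
    conn.commit()
    return conn.execute("PRAGMA user_version").fetchone()[0]
```

Running `upgrade` twice is safe: the second call sees `user_version` already at the latest value and applies nothing.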
Sorry for the inconvenience.