
ERROR - Exception on /update_model_params [POST]

Open jagbarcelo opened this issue 1 year ago • 1 comment

Expected Behavior

When browsing the UI, we go to the Settings tab, make any change (or none) and click the Update parameters button. This action should presumably update the file ./configs/local_default.yaml with the data currently shown in the fields.

Current Behavior

However, after clicking the Update parameters button, the browser shows the error Error setting configuration, and the console shows the following:

[2023-05-06 14:14:15,966] {app.py:1414} ERROR - Exception on /update_model_params [POST]
Traceback (most recent call last):
  File "D:\gpt4all-ui\GPT4All\env\lib\site-packages\flask\app.py", line 2190, in wsgi_app
    response = self.full_dispatch_request()
  File "D:\gpt4all-ui\GPT4All\env\lib\site-packages\flask\app.py", line 1486, in full_dispatch_request
    rv = self.handle_user_exception(e)
  File "D:\gpt4all-ui\GPT4All\env\lib\site-packages\flask\app.py", line 1484, in full_dispatch_request
    rv = self.dispatch_request()
  File "D:\gpt4all-ui\GPT4All\env\lib\site-packages\flask\app.py", line 1469, in dispatch_request
    return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args)
  File "D:\gpt4all-ui\GPT4All\app.py", line 596, in update_model_params
    self.config['temperature'] = float(data["temperature"])
KeyError: 'temperature'
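Not part of the original report: a minimal sketch of how the handler at app.py line 596 could guard against a missing key instead of raising KeyError. The helper name and fallback behaviour are assumptions, not the project's actual code:

```python
def get_float_param(data, key, default, legacy_key=None):
    """Fetch a float parameter from the request payload, falling back to a
    legacy key name or a default instead of raising KeyError."""
    if key in data:
        return float(data[key])
    if legacy_key is not None and legacy_key in data:
        return float(data[legacy_key])
    return default
```

With such a helper, a payload that still uses an older key name (for example `temp` instead of `temperature`) would be accepted, and an absent key would fall back to the current config value rather than crashing the request.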

Steps to Reproduce

Every time we click Update parameters, the same error appears, always complaining about the temperature parameter (the first float value in the UI). It makes no difference whether we set temperature to an integer such as 1, which would presumably be sent as the string "1" with no dot or comma at all.

Possible Solution

We are using a Spanish-localised version of Windows. The error might be caused by decimal values being formatted with a comma instead of a dot (or vice versa). Perhaps the values should be converted using an invariant culture, so that the locale of the system running gpt4all-ui does not matter.
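To illustrate the invariant-culture idea: Python's built-in `float()` always expects a dot as the decimal separator, so if a Spanish-locale client sent "0,9" it would raise ValueError rather than KeyError. A hedged sketch of a separator-tolerant parser (the helper name is an assumption, not existing project code):

```python
def parse_float_invariant(value):
    """Parse a float regardless of the decimal separator the client sent.
    float() always expects a dot, so a locale that formats 0.9 as "0,9"
    would otherwise fail to parse."""
    if isinstance(value, str):
        value = value.strip().replace(",", ".")
    return float(value)
```

This would make the server tolerant of either separator, though as the maintainer's reply below shows, the actual cause here turned out to be a renamed key rather than a locale issue.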

Context

Current contents of our ./config/local_default.yaml file are:

version: 3
config: default
ctx_size: 2048
db_path: databases/database.db
debug: false
n_threads: 8
host: localhost
language: en-US
# Supported backends are llamacpp and gpt-j
backend: llama_cpp
model: gpt4all-lora-quantized-ggml.bin
n_predict: 1024
nb_messages_to_remember: 5
personality_language: english
personality_category: generic
personality: gpt4all
port: 9600
repeat_last_n: 40
repeat_penalty: 1.2
seed: 0
temperature: 0.9
top_k: 50
top_p: 0.95
voice: ""
use_gpu: false # Not active yet
auto_read: false
use_avx2: true # By default we require using avx2 but if not supported, make sure you remove it from here
use_new_ui: false # By default use old ui
override_personality_model_parameters: false #if true the personality parameters are overriden by those of the configuration (may affect personality behaviour) 

As you can see, temperature is already there and set to 0.9 (with a dot).

Screenshots

(screenshot attached to the original issue)

jagbarcelo avatar May 06 '23 12:05 jagbarcelo

Hi, and thank you for your remark. Yes, this is because we changed temp to temperature, and as we are working on the new UI, I forgot to change it in the old UI as well.

The bug is now fixed. Once again, thank you for testing the application. The new UI is going to be amazing and way better than this one. You can get a sneak peek at it by setting the parameter use_new_ui to true in your configs/config_local.yaml file. It is still in beta and not fully functional, but it has most of the needed functionality.

Best regards

ParisNeo avatar May 06 '23 12:05 ParisNeo