gpt4all
Don't allow zero as temperature in ModelSettings.qml
### Describe your changes
Don't allow a temperature of zero in ModelSettings.qml: any value lower than 0.000001 is clamped to that minimum instead.
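Roughly, the validation amounts to something like the following (a plain-JavaScript sketch of the idea, not the actual QML code; `clampTemperature` is a hypothetical helper name):

```javascript
// Minimal sketch of the clamping idea: values below the minimum
// are snapped up to MIN_TEMPERATURE, so temp can never reach zero.
const MIN_TEMPERATURE = 0.000001;

function clampTemperature(value) {
    return value < MIN_TEMPERATURE ? MIN_TEMPERATURE : value;
}
```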
### Issue ticket number and link
N/A
### Checklist before requesting a review
- [x] I have performed a self-review of my code.
- [ ] If it is a core feature, I have added thorough tests.
- [ ] I have added thorough documentation for my code.
- [x] I have tagged PR with relevant project labels. I acknowledge that a PR without labels may be dismissed.
- [ ] If this PR addresses a bug, I have provided both a screenshot/video of the original bug and the working solution.
### Demo
Editing: (screenshot)
After edit: (screenshot)
### Steps to Reproduce
Right now, setting temperature to zero is allowed, but it shouldn't be.
### Notes
This PR is meant to be seen as a suggestion for now.
I'm setting this as draft for the following reasons:
- Maybe this should be handled in the backend instead, and maybe 0 as temperature should be allowed again.
- I think it prevents setting zero as a value, but it seems to be flaky in the UI regardless (e.g. when I Alt-Tab).
- I have not figured out how to get rid of the scientific notation after it corrects itself.
The second commit improves on points 2 and 3 from the notes above.
The improved behaviour uses `.toLocaleString()`, although it's probably the wrong locale. I think it should be the one from the selected translation, once those are ready.
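For context, `toLocaleString()` formats the number according to a locale, which is exactly why the decimal separator can change (a plain-JavaScript illustration; the QML side would pass a locale object from `Qt.locale()` rather than a tag):

```javascript
// toLocaleString() picks the decimal separator from the locale,
// so the same value renders differently per locale.
const value = 0.7;
console.log(value.toLocaleString("en-US")); // "0.7"
console.log(value.toLocaleString("de-DE")); // "0,7"
```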
Edit: Parsing might need some changes with different locales, too.
Edit 2: Turns out you're using `QLocale::setDefault()` in the translation logic, and QML's `Qt.locale()` does the right thing with that.
Should be good now, apart from a minor cosmetic glitch (rare corner case).
I tested it by adding the following to https://github.com/nomic-ai/gpt4all/blob/6b8e0f7ae4bc032cd926e27547d6359da92a0f30/gpt4all-chat/main.cpp#L53-L54 on line 54:

```cpp
QLocale::setDefault(QLocale("de_DE"));
```
Try to break it; the following should no longer be possible:
- Setting a temperature of 0 or below.
- Using anything other than a comma as the decimal separator with `de_DE` as the locale.
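A locale-aware replacement for the parsing side could be sketched like this in plain JavaScript (`parseLocaleFloat` is a hypothetical helper that discovers the separators via `Intl.NumberFormat`; the QML code itself could instead use `Number.fromLocaleString(Qt.locale(), text)`):

```javascript
// Hypothetical locale-aware float parser: look up the group and
// decimal separators for the locale, then normalize the string so
// the standard parseFloat (which expects ".") can handle it.
function parseLocaleFloat(text, locale) {
    const parts = new Intl.NumberFormat(locale).formatToParts(12345.6);
    const group = parts.find(p => p.type === "group")?.value ?? ",";
    const decimal = parts.find(p => p.type === "decimal")?.value ?? ".";
    const normalized = text
        .split(group).join("")   // drop grouping separators
        .replace(decimal, "."); // use "." as the decimal point
    return parseFloat(normalized);
}
```

With `de_DE`, this accepts `"0,7"` as 0.7 while treating `"."` as a grouping separator.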
Note, though: this is not yet hooked up to the translation dropdown or locale changes. That would be the next thing to work on for all input fields that need to be locale-aware.
@cebtenzzre Just to be clear, part of this is also making the UI input locale-aware (decimal separator can vary).
The primary goal here was to fix a bug: there's a division by zero in the backend, which is getting hit right now (oddly, it only seems to affect debug builds, at least in my builds). Greedy sampling with temp 0 was once possible in GPT4All (sometime last year); I don't know when exactly that changed.
I am not at all opposed to removing the limit again if/once there's a fix for that in the backend -- actually, I am very much in favour of being allowed to set it to 0.
> @cebtenzzre Just to be clear, part of this is also making the UI input locale-aware (decimal separator can vary).
That's appreciated. If you'd like to make a PR or change this one such that all four instances of `parseFloat` in ModelSettings.qml are replaced with something locale-aware, that'd be great.
> The primary goal here was to fix a bug: there's a division by zero in the backend, which is getting hit right now (oddly, it only seems to affect debug builds, at least in my builds). Greedy sampling with temp 0 was once possible in GPT4All (sometime last year); I don't know when exactly that changed.
But only llama.cpp divides by zero, because of a missing branch. We are actively working (#2806) towards having first-class support for backends other than llamamodel.cpp, so I feel like adding more llamamodel-specific assumptions to the UI code is a step in the wrong direction. Both ollama and OpenAI support temp=0.
> I am not at all opposed to removing the limit again if/once there's a fix for that in the backend -- actually, I am very much in favour of being allowed to set it to 0.
Okay, done: #2854
> That's appreciated. If you'd like to make a PR or change this one such that all four instances of `parseFloat` in ModelSettings.qml are replaced with something locale-aware, that'd be great.
Yes, that was the idea after this one. Although there may be one other thing to consider: in this PR (and probably the next) I had to remove the direct connection in the QML (`text: root.currentModelInfo.temperature`) because of the parsing and conversion.
An alternative would be to have the model be "more closely aligned" with the UI and do the conversions in C++ instead of JS.
In any case, with the merging of #2854, this PR is now obsolete.
> Although there may be one other thing to consider: In this PR (and probably the next) I had to remove the direct connection in the QML (`text: root.currentModelInfo.temperature`) because of the parsing and conversion.
That shouldn't be a problem, since the binding is removed once `onTemperatureChanged` or `onCurrentModelInfoChanged` fires anyway, as long as the value is never initially blank; in that case, you may need a `Component.onCompleted` handler.