Make it possible to keep some chats after downgrading GPT4All
What this PR fixes:
- Deserialization didn't explicitly handle future format versions, so opening chats saved by a newer GPT4All in an older version was ill-defined (it usually hangs). See the sketch after this list.
- Chats were saved unconditionally, so merely updating GPT4All upgraded all of your chats to the new format, making them unreadable by previous versions.
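
For context, here is a minimal sketch of the kind of version gate the deserializer needs, assuming the chat file is a version-prefixed QDataStream; `CHAT_FORMAT_VERSION` and `deserializeChat` are illustrative names, not the actual GPT4All identifiers:

```cpp
#include <QDataStream>
#include <QDebug>

// Assumed current on-disk format version; illustrative value only.
static constexpr qint32 CHAT_FORMAT_VERSION = 9;

bool deserializeChat(QDataStream &in)
{
    qint32 version = 0;
    in >> version;
    if (version > CHAT_FORMAT_VERSION) {
        // A newer GPT4All wrote this chat: skip it cleanly instead of
        // misinterpreting the rest of the stream (which used to hang).
        qWarning() << "chat was saved by a newer version of GPT4All, skipping";
        return false;
    }
    // ... read the remaining fields, branching on `version` for older formats ...
    return in.status() == QDataStream::Ok;
}
```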
Other things I fixed while trying to pin down Q_UNREACHABLE crashes:
- A missing serialization case for API models, which happened to work before only because Q_UNREACHABLE is optimized out in Release builds (sketched after this list).
- This would cause Debug builds to always crash on exit if you had ever previously saved an API chat with a Release build.
- GPT-J state from v2.4.x and older was incorrectly recognized as Llama state, among other conflicts.
- m_llModelType was not initialized when a Chat was constructed, which could cause undefined behavior if it was ever read.
- If a chat file had an unsupported model type, we would either crash or load the chat anyway.
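
To illustrate the last three items, here is a hedged sketch of handling every model type explicitly instead of relying on Q_UNREACHABLE; the enum and function below are hypothetical stand-ins, not the real GPT4All types:

```cpp
#include <QDataStream>
#include <QDebug>

// Hypothetical model-type enumeration; the real code uses its own enum.
enum class ModelType { GptJ, Llama, Api, Unknown };

bool serializeModelState(QDataStream &out, ModelType type)
{
    switch (type) {
    case ModelType::GptJ:
    case ModelType::Llama:
        // ... write the local backend state for these model types ...
        return true;
    case ModelType::Api:
        // API models were the missing case: write an explicit empty marker
        // rather than falling into Q_UNREACHABLE.
        out << qint32(0);
        return true;
    default:
        // Unknown or unsupported type: fail the operation instead of
        // crashing (Debug) or continuing with garbage (Release).
        qWarning() << "refusing to serialize unknown model type";
        return false;
    }
}
```

Along the same lines, giving the member a default value at construction (e.g. `ModelType m_llModelType = ModelType::Unknown;` in this sketch) avoids ever reading an uninitialized value.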
Out of scope for this PR:
- #2626 (seen while testing)
- Saving chat files eagerly rather than at exit
- Handling SIGINT so chat files are saved when GPT4All is interrupted from the command line
- Not losing the chat if the first message is still generating while we exit