gpt4all
[Feature] Improve diagnostics when loading fails due to incompatible model type
Feature Request
Currently, when there is an error loading the model, the following explanation is provided:
Model loading failures can happen for a variety of reasons, but the most common causes include a bad file format, an incomplete or corrupted download, the wrong file type, not enough system RAM or an incompatible model type.
Among these fairly disparate causes, I would argue that the last one (an incompatible model type) differs from the others because it is not really an error: there is no corrective action, i.e. there is nothing the user can or should do about it, and the app itself is working as intended.
Looking at the various GitHub bug reports, this message appears to confuse some users, who load incompatible models and then report the resulting error as some sort of GPT4All bug. And since the offending model type is not reported with the error, even users who know what they are doing cannot be certain of the actual cause.
An incompatible model type is presumably detected programmatically, so it should be both easy and beneficial to:
- Clearly report it as such in the app
- Provide some basic context (e.g. which model type was actually found in the weights file)