gpt4all
Add automatic platform selection for gpt4all-lora-quantized
Tested on both Windows 10 and Debian 10. Untested on Intel and M1 macOS, but it should work; reports are welcome, as I have no such hardware to test on.
What it does: it simplifies the "Try it yourself" section of README.md. The correct binary for the OS is selected by a single generic command, `./gpt4all-lora-quantized`, instead of being chosen manually from:

- M1 Mac/OSX: `cd chat; ./gpt4all-lora-quantized-OSX-m1`
- Linux: `cd chat; ./gpt4all-lora-quantized-linux-x86`
- Windows (PowerShell): `cd chat; ./gpt4all-lora-quantized-win64.exe`
- Intel Mac/OSX: `cd chat; ./gpt4all-lora-quantized-OSX-intel`
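The selection logic can be sketched as a small POSIX-shell dispatcher. This is a hypothetical illustration, not necessarily how the PR implements it; only the binary names are taken from the list above, everything else is an assumption:

```shell
#!/bin/sh
# Hypothetical launcher sketch: map the current platform (uname output)
# to the matching prebuilt binary in the chat/ directory.
select_binary() {
  # $1 = kernel name (uname -s), $2 = machine architecture (uname -m)
  os=$(printf '%s' "$1" | tr '[:upper:]' '[:lower:]')
  case "$os-$2" in
    darwin-arm64)         echo "gpt4all-lora-quantized-OSX-m1" ;;
    darwin-x86_64)        echo "gpt4all-lora-quantized-OSX-intel" ;;
    linux-*)              echo "gpt4all-lora-quantized-linux-x86" ;;
    mingw*|msys*|cygwin*) echo "gpt4all-lora-quantized-win64.exe" ;;
    *) echo "unsupported platform: $os-$2" >&2; return 1 ;;
  esac
}

# Usage (commented out so the sketch stays side-effect free):
#   bin=$(select_binary "$(uname -s)" "$(uname -m)") || exit 1
#   exec "./$bin" "$@"   # forward all original arguments to the real binary
```

The `"$@"` in the last line is what lets such a wrapper pass every old argument through to the platform-specific binary unchanged.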
Old:

New:

- Download the `gpt4all-lora-quantized.bin` file from Direct Link or [Torrent-Magnet].
- Clone this repository, navigate to `chat`, and place the downloaded file there.
- Run the appropriate command for your OS:
  - Windows (Batch): `cd chat && .\gpt4all-lora-quantized`
  - Windows (PowerShell): `cd chat; ./gpt4all-lora-quantized`
  - Linux/Mac/OSX: `cd chat; chmod +x ./gpt4all-lora-quantized; ./gpt4all-lora-quantized`
All old arguments are supported and tested:

```shell
./gpt4all-lora-quantized --help
./gpt4all-lora-quantized --model .\gpt4all-lora-quantized.bin
...
```
Tested on macOS (M1)
System Info:
Results attached:
Thank you for proving that it is stable and that it works.
Pull request is ready to be merged.