lollms-webui
Model conversion error
Expected Behavior
Conversion of model gpt4all-lora-quantized-ggml.bin
Current Behavior
Do you want to convert the selected model to the new format? [Y,N]?Y
Converting the model to the new format...
Cloning into 'tmp\llama.cpp'...
remote: Enumerating objects: 1707, done.
remote: Counting objects: 100% (1707/1707), done.
remote: Compressing objects: 100% (623/623), done.
remote: Total 1707 (delta 1088), reused 1629 (delta 1050), pack-reused 0
Receiving objects: 100% (1707/1707), 1.87 MiB | 3.10 MiB/s, done.
Resolving deltas: 100% (1088/1088), done.
1 file(s) moved.
C:\Users\jtone\AppData\Local\Microsoft\WindowsApps\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\python.exe: can't open file 'C:\gpt4all-ui\tmp\llama.cpp\migrate-ggml-2023-03-30-pr613.py': [Errno 2] No such file or directory
Error during model conversion. Restarting...
1 file(s) moved.
Steps to Reproduce
Run install.bat, press B, then select the Y option twice.
Possible Solution
I manually copied the missing file into the tmp folder that was created and it worked as expected.
Context
OS: Windows 11
CPU: AMD Ryzen 7 5700G
RAM: 32.0 GB (27.9 GB usable)
Same error
And run.bat only shows this:
I apologise, I thought I had pasted the link to the file. It is located here: https://github.com/ggerganov/llama.cpp/blob/3525899277d2e2bdc8ec3f0e6e40c47251608700/migrate-ggml-2023-03-30-pr613.py
Save it under the file name mentioned and place it in the tmp folder that is created when you run run.bat.
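The manual workaround above can be sketched as a couple of shell commands. This is only a sketch of what the thread describes: it derives the raw download URL from the GitHub "blob" link posted above (a standard blob-to-raw URL rewrite) and fetches the script into the tmp\llama.cpp folder the installer creates; paths and the presence of curl are assumptions.

```shell
# Link to the migration script as posted in the thread (GitHub "blob" page).
blob_url="https://github.com/ggerganov/llama.cpp/blob/3525899277d2e2bdc8ec3f0e6e40c47251608700/migrate-ggml-2023-03-30-pr613.py"

# Rewrite the blob page URL into its raw-content equivalent:
# github.com -> raw.githubusercontent.com, and drop the "/blob" path segment.
raw_url=$(echo "$blob_url" | sed 's#github.com#raw.githubusercontent.com#; s#/blob/#/#')
echo "$raw_url"

# Assumed location: the tmp folder created by run.bat (adjust the path as needed).
# curl -L -o tmp/llama.cpp/migrate-ggml-2023-03-30-pr613.py "$raw_url"
```

The download line is left commented out since the exact tmp path depends on where run.bat is executed from.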
I happened to find that .py file and placed it in tmp/llama.cpp/, but here are the results.
Anyway, isn't it for old models only? We don't need conversion if we already have the ggml version, right?
For some reason I get a different error:
Hi guys. It seems that the llama.cpp repo has changed again and there is no access to the migration tool. So I have decided to upload my own converted model to the Hugging Face repo, and I'll ditch the conversion step altogether.
I am on vacation and have a limited connection, so uploading the model will take a few hours.
When it's ready I'll change all the install scripts to use it.
I hope this will help you all make it work.