llamafile
All Sorts of Issues Executing (WSL and Windows)
Hey guys, so I'm having a difficult time getting certain files to load. Here's one example: the file below works on Windows if I change it to an .exe, but fails to work when I leave it as a llamafile for WSL.
cognibuild@DESKTOP-I6N5JH7:/mnt/e/OneClickLLMs$ chmod +x rocket-3b.Q5_K_M.llamafile.exe rocket-3b.Q5_K_M.llamafile
cognibuild@DESKTOP-I6N5JH7:/mnt/e/OneClickLLMs$ ./rocket-3b.Q5_K_M.llamafile.exe rocket-3b.Q5_K_M.llamafile -ngl 9999
-bash: ./rocket-3b.Q5_K_M.llamafile.exe: Invalid argument
Then there's this one, which I can't get to run on either Windows or WSL (with the extension properly changed):
Meta-Llama-3-8B-Instruct.Q5_K_M.llamafile.exe
"This app can't run on your PC" (big blue screen)
Any advice is appreciated
Came here to report a very similar experience.
$ chmod +x Meta-Llama-3-70B-Instruct.Q4_0.llamafile
$ ./Meta-Llama-3-70B-Instruct.Q4_0.llamafile -ngl 9999
./Meta-Llama-3-70B-Instruct.Q4_0.llamafile: Invalid argument
I'm running exactly what the README says to run, and it doesn't work. But I had downloaded the original llamafile when it was first released, and that version worked fine. What has changed between that release and this one?
Renaming it to end in .exe and running it directly on Windows instead, I get this:
from the README
Unfortunately, Windows users cannot make use of many of these example llamafiles because Windows has a maximum executable file size of 4GB, and all of these examples exceed that size. (The LLaVA llamafile works on Windows because it is 30MB shy of the size limit.) But don't lose heart: llamafile allows you to use external weights; this is described later in this document.
I want to know how to reduce the size to < 4GB
This seems to work on Windows: rename the llamafile binary from the releases page to end in .exe, then run it with external GGUF weights:
.\llamafile.exe -m "path\to\gguf\file.gguf" -ngl 9999
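To make that concrete, here is a minimal sketch of the rename step (the version number and filenames are examples, not from this thread; the commented command at the end is how it would then be invoked on Windows, not something run here):

```shell
# Stand-in for the downloaded launcher binary; the real one comes from
# the llamafile releases page (the version/name here is an example).
touch llamafile-0.8.1
# Windows only launches files whose extension marks them as executable,
# so give it one; under WSL/Linux no rename is needed, only chmod +x.
mv llamafile-0.8.1 llamafile-0.8.1.exe
chmod +x llamafile-0.8.1.exe
ls llamafile-0.8.1.exe
# On Windows it would then be invoked with external weights:
#   .\llamafile-0.8.1.exe -m "path\to\weights.gguf" -ngl 9999
```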
The README says to download the weights separately in order to run the llamafile on Windows.
On Windows it works great: just unzip the file and you can load the weights separately with a .bat file.
As for WSL, the .sh file should run, but it isn't.
Downloading llamafile-0.8.1 from the releases page, then renaming it to have an .exe extension, and using that to run the model worked for me.
It would be nice if the project's readme had similar instructions:
.\llamafile-0.8.1.exe -m "Meta-Llama-3-70B-Instruct.Q4_0.llamafile.exe" --server -ngl 9999
On an RTX 3090, I get 0.5 tokens per second.
Ran into same issue.
./Meta-Llama-3-8B-Instruct.Q5_K_M.llamafile: Invalid argument
when I disabled the WSL Win32 interop feature as follows:
[interop]
enabled=false
got the following message:
<3>WSL (2233) ERROR: UtilAcceptVsock:250: accept4 failed 110
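For anyone trying to reproduce this: that setting lives in /etc/wsl.conf inside the WSL distro, and it only takes effect after restarting the distro (e.g. `wsl --shutdown` from Windows). A minimal sketch of the file:

```ini
# /etc/wsl.conf  (inside the WSL distro; apply with `wsl --shutdown`)
[interop]
enabled=false
```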
Same here, have you found a fix?
unfortunately not. I abandoned working with this project for now and have put my attention on KoboldCPP
Same error here, with interop disabled.
./llava-v1.5-7b-q4.llamafile
<3>WSL (273) ERROR: UtilAcceptVsock:250: accept4 failed 110
Same:
./llava-v1.5-7b-q4.llamafile
<3>WSL (667) ERROR: UtilAcceptVsock:250: accept4 failed 110
[Unit]
Description=cosmopolitan APE binfmt service
After=wsl-binfmt.service
[Service]
Type=oneshot
ExecStart=/bin/sh -c "echo ':APE:M::MZqFpD::/usr/bin/ape:' >/proc/sys/fs/binfmt_misc/register"
[Install]
WantedBy=multi-user.target
Put this in /etc/systemd/system/cosmo-binfmt.service
Then sudo systemctl enable cosmo-binfmt.
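For reference, the string that unit registers follows the kernel's binfmt_misc format, `:name:type:offset:magic:mask:interpreter:flags` — type `M` means "match by magic bytes", `MZqFpD` is the APE signature at the start of a llamafile, and /usr/bin/ape is the loader that gets invoked. A quick sketch pulling the fields apart:

```shell
# binfmt_misc register-string fields: :name:type:offset:magic:mask:interpreter:flags
REG=':APE:M::MZqFpD::/usr/bin/ape:'
name=$(printf '%s' "$REG" | cut -d: -f2)     # handler name shown in /proc
magic=$(printf '%s' "$REG" | cut -d: -f5)    # magic bytes matched at offset 0
interp=$(printf '%s' "$REG" | cut -d: -f7)   # interpreter run for matching files
echo "name=$name magic=$magic interpreter=$interp"
```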
To fix the "invalid argument" error in WSL, I ran both of these and then tried again, which worked:
sudo sh -c 'echo -1 > /proc/sys/fs/binfmt_misc/WSLInterop'
sudo sh -c 'echo -1 > /proc/sys/fs/binfmt_misc/WSLInterop-late'
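For context, per the kernel's binfmt_misc interface, writing -1 to an entry removes that handler entirely (writing 0/1 merely disables/enables it), and WSL re-registers its interop handlers on restart. A small sketch for checking the current state of the relevant handlers (entry names assume a stock WSL distro; on other systems they simply report as not registered):

```shell
# Report the state of the binfmt_misc handlers involved here:
# WSLInterop intercepts MZ-prefixed files, APE handles llamafiles.
report=""
for name in WSLInterop WSLInterop-late APE; do
  f="/proc/sys/fs/binfmt_misc/$name"
  if [ -e "$f" ]; then
    state=$(head -n1 "$f")   # first line is "enabled" or "disabled"
  else
    state="not registered"
  fi
  report="$report$name: $state
"
done
printf '%s' "$report"
```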
I have completed this and the steps by zvan92 as well. However, I'm still getting the "<3>WSL (460) ERROR: UtilAcceptVsock:251: accept4 failed 110" error that orangewise mentions above.
That accept4 error is a bug in WSL. Please direct your feedback to Microsoft. https://github.com/microsoft/WSL/issues/8677