
Any idea what I'm doing wrong? No output no matter what.

Open mapleroyal opened this issue 1 year ago • 12 comments

No errors installing. I got the official models, converted and then quantized them using llama.cpp. dalai loads fine and sees my models, but when I type in the prompt (with any template or none) and press "go", my CPUs show no increase in activity, and there's never any sign that dalai is actually doing anything beyond the button changing to a cancel button with a spinner.

??? Anyone else solved this yet?

mapleroyal avatar Mar 23 '23 17:03 mapleroyal

I have the same problem at the moment. I'm working through a couple of things that might be causing it and will update if I find anything. Are you running on Windows?

Moralizing avatar Mar 23 '23 17:03 Moralizing

Linux. Please update if you find out! Every single attempt to use dalai has been thwarted on 2 separate Linux machines. No matter what I do, it either randomly doesn't see the models until I delete the directories, re-create them, and put the files back in, or, once it does see the models, it never actually produces any output.

mapleroyal avatar Mar 23 '23 17:03 mapleroyal

I'm facing the same issue. Not sure what's going on. I'm on Windows 10

taaalha avatar Mar 23 '23 19:03 taaalha

So, from what I can see (I'm at the same point): when we click the go button, dalai execs a main binary from the directory dalai\llama\build\Release. On my build that directory is empty and main.obj isn't there. I found one in dalai\llama\build\llama.dir\Release and am trying manually to see if I can get it to start from there. Will keep you posted.

Moralizing avatar Mar 23 '23 19:03 Moralizing

Ok, so this may only work for @taaalha or anyone else on Windows, but: go into \dalai\llama\build\bin\Release, copy llama.exe to dalai\llama\build\Release, and rename it main.exe. Then turn the server back on using npx dalai serve, and now when you pick the llama model from the dropdown it works. Credit to @Sociopath in Discord for pointing me in the right direction with the .exe rename.
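In shell terms, the rename boils down to something like this. `copy_main` is just an illustrative helper name, the paths are the ones from this thread (written POSIX-style; on Windows you'd do the same in Explorer or cmd), and nothing here is from dalai's own docs:

```shell
# Copy the binary CMake actually produced (build/bin/Release/llama.exe)
# to the location dalai execs it from (build/Release/main.exe).
copy_main() {
  build="$1"                         # e.g. "$HOME/dalai/llama/build"
  src="$build/bin/Release/llama.exe"
  dst="$build/Release/main.exe"
  if [ -f "$src" ]; then
    mkdir -p "$build/Release"
    cp "$src" "$dst"
    echo "copied to $dst"
  else
    echo "no build output at $src" >&2
    return 1
  fi
}
```

After that, restart the server with npx dalai serve as described above.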

Moralizing avatar Mar 23 '23 19:03 Moralizing

Alpaca works for me on Zorin Linux... but everything fails miserably on Windows... Maybe loading more models needs more than 32GB of RAM. I will try again...

RIAZAHAMMED avatar Mar 23 '23 23:03 RIAZAHAMMED

Yeah, it's rough on Windows @RIAZAHAMMED. I'm running a Ryzen 9 3900 with 64GB of RAM and the struggle is real on some of the responses.

Moralizing avatar Mar 24 '23 08:03 Moralizing

Any reason why my install would not even have a llama folder, let alone anything in it? It installed with no errors. The dalai directory contains:

  • /home/username/dalai/config/prompts (an empty path)
  • /home/username/dalai/venv (the virtual environment)

It didn't even have the llama folder until I created it to put the models into.

mapleroyal avatar Mar 26 '23 17:03 mapleroyal

More issues where people are complaining of no output:

https://github.com/cocktailpeanut/dalai/issues/281

https://github.com/cocktailpeanut/dalai/issues/221

mapleroyal avatar Mar 26 '23 18:03 mapleroyal

FIGURED IT OUT!!!

Apparently, if you have your own models and don't need dalai to download them for you, then npx dalai serve does not actually install the rest of the project. Here's what fixes it:

Even though you already have the models, run npx dalai llama install 7B. It will do some setup, and you can cancel it once it starts downloading the model. Then delete everything except ggml-vocab.bin in /home/user1/llama.cpp/models/ and put your models in there, in folders called 7B, 13B, etc. That's it. The whole problem is that whoever made this has completely neglected to even attempt to read the issues lmfao, or to account for the fact that some of us already have our models.
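The directory-cleanup half of those steps can be sketched as a small shell helper (run npx dalai llama install 7B and cancel the download yourself first). `clean_models_dir` is a hypothetical name, and the list of entries to keep comes from this thread, not from dalai's docs:

```shell
# Remove everything in llama.cpp's models dir except ggml-vocab.bin
# and the size-named model folders (7B, 13B, ...), which is the layout
# dalai apparently expects.
clean_models_dir() {
  models="$1"                        # e.g. "$HOME/llama.cpp/models"
  for entry in "$models"/*; do
    name=$(basename "$entry")
    case "$name" in
      ggml-vocab.bin|7B|13B|30B|65B) ;;            # keep these
      *) echo "removing $name"; rm -rf "$entry" ;;
    esac
  done
}
```

Then place your already-converted models inside the 7B, 13B, etc. folders.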

mapleroyal avatar Mar 26 '23 18:03 mapleroyal

Hi mapleroyal, when you said "delete everything except ...", what did you mean exactly? I only have the folder 7B and ggml-vocab.bin, so I don't know what I have to delete. Could you please give more detail on the steps we have to follow?

Many thanks in advance for your help.

critong

critong avatar Apr 18 '23 16:04 critong

That was a while ago but I think what I said is likely what I meant to say:

In the folder /home/username/llama.cpp/models/, make sure there isn't anything except that file I mentioned and folders named 7B, 13B, etc. There should be NOTHING else in there.
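A quick way to verify that, sketched as a POSIX find one-liner wrapped in a helper (`list_strays` is an illustrative name; the allowed entries are the ones from this thread):

```shell
# Print any entry in the models dir that is NOT ggml-vocab.bin or a
# size-named folder; for the fix above this should print nothing.
list_strays() {
  find "$1" -mindepth 1 -maxdepth 1 \
    ! -name 'ggml-vocab.bin' ! -name '7B' ! -name '13B' \
    ! -name '30B' ! -name '65B'
}
```

If it prints anything, that's what has to go.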

mapleroyal avatar Apr 18 '23 16:04 mapleroyal