linuxmagic-mp

27 comments by linuxmagic-mp

> I started working on a fork of llama-cpp-python for ggllm.cpp, but it's not working yet. Anyone that wants to help is more than welcome. [falcon-cpp-python](https://github.com/sirajperson/falcon-cpp-python) I think that rather...

Interrupting this thread to point out the latest warnings from that pull. Operating System: Ubuntu 20.04.6 LTS; Kernel: Linux 5.4.0-147-generic; Architecture: x86-64; `lscpu` -> 'Model name: Intel(R) Core(TM) i5-7400 CPU @...
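For anyone comparing setups, those details came from the usual tools; a minimal sketch (output will obviously differ per machine):

```sh
# Collect the same system details quoted above
lsb_release -d                 # Operating System (Ubuntu 20.04.6 LTS here)
uname -r                       # Kernel (5.4.0-147-generic here)
uname -m                       # Architecture (x86_64 here)
lscpu | grep 'Model name'      # CPU model (Intel Core i5-7400 here)
```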

Might also want to update the README and mention that you have to move (or symlink) the original tokenizer.json to the new location of the ggml formats before quantizing. (or...
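To illustrate what I mean, something along these lines, with the paths as placeholders for wherever your converted files actually live:

```sh
# Example paths only -- adjust to your own layout
ORIG=~/models/falcon-40b-instruct        # original HF download containing tokenizer.json
GGML=~/models/falcon-40b-instruct-ggml   # directory holding the converted ggml files

# Symlink (or copy/move) the tokenizer next to the ggml files before running the quantizer
ln -s "$ORIG/tokenizer.json" "$GGML/tokenizer.json"
```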

Fine by me, just want to make sure I am keeping up ;) @cmp-nct did we break something in the latest batch of commits? `make clean && make` failed to...

Yes, I guess that is important to include. Brand new Nvidia 4090 24GB, and thanks for the tip on the `-t` settings; however, I do want to use the GPU...
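For the record, this is roughly how I intend to run it; the model path is made up and the flag names should be double-checked against `./falcon_main --help` in the current tree:

```sh
# Assumptions: a built falcon_main binary and an already-quantized model file (path is a placeholder).
# -t   : CPU threads (the i5-7400 has 4 cores)
# -ngl : number of layers to offload to the GPU (the 4090's 24GB should take most or all of them)
./falcon_main -m ~/models/falcon-40b-instruct-ggml/ggml-model-q4_k.bin -t 4 -ngl 100 -p "Hello"
```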

Yes, `make` works fine, other than the warnings I noted in the discussions and in the examples. But with so many cooks in the kitchen, I didn't want to actually do any clean...

Note: using the standard `make` method, I was able to safely convert the model with the use32 option, and other than missing a few steps, e.g. manually having to do `make falcon_convert`...
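Roughly the sequence I ended up with; the converter/quantizer arguments below are placeholders, so check each tool's usage output rather than trusting them verbatim:

```sh
# Build the tools; the converter target wasn't built for me by default
make clean && make
make falcon_convert

# Convert the HF model to 32-bit ggml (the use32 option mentioned above), then quantize.
# Arguments and output filenames are assumptions -- run ./falcon_convert and ./falcon_quantize
# with no arguments to see the real usage strings.
./falcon_convert ~/models/falcon-40b-instruct ~/models/falcon-40b-ggml use32
./falcon_quantize ~/models/falcon-40b-ggml/ggml-model-f32.bin ~/models/falcon-40b-ggml/ggml-model-q4_k.bin q4_k
```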

That helped. I will close this thread for now... Will update or create a new ticket with the warnings; there are a lot more after updating to the latest ;) And see...

Interesting, I will have to wait a couple of days before I'm in front of the machine again, but will have to compare notes. Just considering whether to use the Falcon 40B Instruct...