Dennis Aleynikov
Your A52 might just not allow Termux to allocate that much RAM. Weird given you have 6GB
Did you compile with the NDK instructions first? @aicoat on Raspberry Pi it's possible to enable swap, which afaik I haven't gotten working on Android yet. If I ever get my...
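(For anyone hitting those allocation failures: a minimal diagnostic sketch, assuming a Linux-style /proc/meminfo, which Termux exposes. It just reports how much RAM and swap are actually available before you try to load a model.)

```python
# Minimal sketch: check available RAM and swap before loading a model.
# Works in any Linux-ish environment, including Termux; purely diagnostic.

def meminfo_kb():
    info = {}
    with open("/proc/meminfo") as f:
        for line in f:
            key, value = line.split(":", 1)
            info[key] = int(value.strip().split()[0])  # values are in kB
    return info

info = meminfo_kb()
print(f"MemAvailable: {info['MemAvailable'] / 1024 / 1024:.1f} GiB")
print(f"SwapTotal:    {info['SwapTotal'] / 1024 / 1024:.1f} GiB")
# If MemAvailable + SwapTotal is smaller than the model file, an
# allocation failure (or the OOM killer) is the expected outcome.
```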
I'm curious if this works for you when compiling locally @dniku. Last time I tried it, I had success without having to compile it with the NDK: https://github.com/antimatter15/alpaca.cpp
I've attempted this and it worked for me once, but I couldn't get anything to generate. I'm having issues with scikit-learn and torch for now
I have a 32GB M2 Pro and would love to get this working. So far it only runs in CPU mode and performs fine. 4chanGPT works very fast, LLaMA runs...
Yeah, I'll consider an even beefier M3 with 128GB of RAM, but for now the whole machine learning community seems kneecapped by the PyTorch MPS implementation being half done (especially compared...
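(To make the "only runs in CPU mode" point concrete, this is roughly how you probe for the PyTorch MPS backend and fall back to CPU when it's missing; the op-coverage gaps are the "half done" part being complained about.)

```python
import torch

# Prefer Apple's Metal (MPS) backend when this PyTorch build has it,
# otherwise fall back to plain CPU execution.
if torch.backends.mps.is_available():
    device = torch.device("mps")
else:
    device = torch.device("cpu")

# Ops the MPS backend doesn't implement yet can be routed to CPU by
# setting the PYTORCH_ENABLE_MPS_FALLBACK=1 environment variable.
x = torch.randn(2, 3, device=device)
print(device, x.sum().item())
```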
I got the pygmalion-2.7B model running! Thank you, one step closer to the dream :) No surprise it was very upset when trying to actually run the LLaMA model,...
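(For reference, a minimal sketch of loading pygmalion-2.7B through Hugging Face transformers. The PygmalionAI/pygmalion-2.7b model id is my assumption about which checkpoint is meant; on a Mac you'd move the model and inputs to "mps" to exercise the backend discussed above.)

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumption: the checkpoint being discussed is the one published as
# PygmalionAI/pygmalion-2.7b on the Hugging Face Hub.
model_id = "PygmalionAI/pygmalion-2.7b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("Hello, how are you?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```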
Looks like someone figured out LLaMA 7B on Apple Silicon, in case anyone here is interested: https://github.com/ggerganov/llama.cpp
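(If you'd rather drive that repo from Python than from its CLI, the llama-cpp-python bindings wrap it. A sketch, assuming you've already produced a ggml-format 7B weights file; the path below is a placeholder.)

```python
from llama_cpp import Llama  # pip install llama-cpp-python

# Assumes a ggml-converted LLaMA 7B weights file; the path is a placeholder.
llm = Llama(model_path="./models/7B/ggml-model-q4_0.bin")
out = llm("Building a website can be done in 10 simple steps:", max_tokens=64)
print(out["choices"][0]["text"])
```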
Afaik the zip contains parts of the MPS code working for Pygmalion. It hasn't been tested with LLaMA or other models; I haven't checked since my initial reply
I'm trying to run it on a Chromebook but I've hit a segfault :( One thing I may have done wrong is that I reused the same .bin I...
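(One cheap check for that segfault: the various forks expect different on-disk formats, so a .bin converted for one tool can crash another. A hedged diagnostic sketch; the magic values below are the ones I believe the early ggml-family formats used, so treat them as an assumption, and the filename is a placeholder.)

```python
import struct

# Read the first 4 bytes of the model file and compare them against the
# magics I believe the ggml-family formats used (assumption, not gospel):
#   0x67676d6c "ggml", 0x67676d66 "ggmf", 0x67676a74 "ggjt"
KNOWN_MAGICS = {0x67676D6C: "ggml", 0x67676D66: "ggmf", 0x67676A74: "ggjt"}

with open("ggml-model-q4_0.bin", "rb") as f:  # placeholder path
    (magic,) = struct.unpack("<I", f.read(4))

print(f"magic = 0x{magic:08x} -> {KNOWN_MAGICS.get(magic, 'unknown format')}")
# An unknown magic here means the loader will likely misparse the file,
# which is a plausible route to the segfault described above.
```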