tofasthacker
I am running Ubuntu in a VM with VirtualBox. Here is the output of the lscpu command. Thank you for your help so far.

brickman@Ubuntu-brickman:~/Desktop$ lscpu
Architecture: x86_64...
I ran the command with g++ and this is what I got.

brickman@Ubuntu-brickman:~/Desktop/llama.cpp$ make g++
I llama.cpp build info:
I UNAME_S: Linux
I UNAME_P: x86_64
I UNAME_M: x86_64
I CFLAGS:...
I updated my g++ version and tried your command, but I am still having trouble. I really appreciate your help. Thank you.

brickman@Ubuntu-brickman:~/Desktop/llama.cpp$ g++ -I. -I./examples -O3 -DNDEBUG -std=c++11 -fPIC...
Looks like the same error.

brickman@Ubuntu-brickman:~/Desktop/llama.cpp$ make clean;make
I llama.cpp build info:
I UNAME_S: Linux
I UNAME_P: x86_64
I UNAME_M: x86_64
I CFLAGS: -I. -O3 -DNDEBUG -std=c11 -fPIC -pthread -mavx...
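One thing that may be worth checking, since the build only misbehaves inside the VM: the CFLAGS in that make output include -mavx, and a VirtualBox guest does not always expose AVX to the guest CPU. This is a hedged guess at the cause, not a confirmed diagnosis, but it is quick to test from inside the Ubuntu guest:

```shell
# Check whether the guest CPU advertises the AVX instruction set.
# llama.cpp's default CFLAGS compile with -mavx, so a guest whose
# /proc/cpuinfo flags lack "avx" can run into trouble with that build.
if grep -qw avx /proc/cpuinfo; then
    echo "AVX is visible to this guest"
else
    echo "AVX is NOT visible to this guest"
fi
```

If AVX is not visible inside the VM but is on the host, that would line up with the build behaving differently on a real machine.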
OK, I will try to run the command on a real machine in a couple of hours. Did you put your Docker image on Docker Hub?
I was successfully able to build when I was not in a virtual machine. But now I am wondering where I can download the LLaMA model.
Thank you for all your help.