Postconceptlab
How do I change the batch size in the run settings template?
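I'm not sure how that template maps onto the backend, but if it ends up passing flags to a llama.cpp binary, the batch size is normally set with the -b / --batch-size flag. A rough sketch (the model path and prompt here are just placeholders):

./main -m ./models/your-model.bin -b 256 -p "your prompt here"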
Is there a way to use the CPU instead of the GPU, so I don't have the CUDA problem? Maybe at the cost of longer processing time?
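If the backend is the stock llama.cpp Makefile, my understanding is that a plain make already gives a CPU-only build, and CUDA only comes in when you enable it explicitly (e.g. LLAMA_CUBLAS=1 on the 2023-era Makefile). Slower, but no CUDA needed. A rough sketch, exact flags depend on your llama.cpp version:

cd llama.cpp
make clean
make                      # default build: CPU only, no CUDA required
# make LLAMA_CUBLAS=1     # only this variant pulls in CUDA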
Same issue on Ubuntu as benbois.
I tried to restart the app; this is the message in the terminal:
yarn run v1.22.19
$ electron-forge start
✔ Checking your system
✔ Locating application
✔ Loading configuration
✔ Preparing native...
I forgot to run make for llama.cpp. Now it loads, but when I ask a question the AI doesn't seem to respond and leaves a blank response. I also get these warnings...