niansa/tuxifan
I've added testing instructions to the top post. :-)
> Hello! Thanks for the hard work.
>
> I'm on Linux with an Iris Xe integrated GPU (OpenCL compatible). Is there any chance of it working? I've forced "buildVariant...
> Any idea about how I could speed that up?

Nope. Integrated graphics are pretty much unsuitable for this. But this should be enough to show that it's working!...
Yeah, CUDA setup should be documented in the `llama.cpp` repo
A compile issue on MSVC has been found and will be solved soon, @cosmic-snow! I'll notify you when there's more.
@cosmic-snow thanks for the testing efforts!! Please note that MPT/GPT-J aren't supported in the new GGML formats yet. I have added the missing compile defines to the CMake file for...
I apologize, there was a little mistake in `llama.cpp.cmake` :-) That should be solved now. Again, thanks a lot for testing all this!
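For context, here's a minimal sketch of what "adding the missing compile defines" in a `llama.cpp.cmake`-style file can look like. The option and define names below (`LLAMA_CUBLAS`, `GGML_USE_CUBLAS`, etc.) are illustrative assumptions, not necessarily the exact ones changed in this PR:

```cmake
# Hypothetical sketch: forward build-variant options as compile definitions
# so the matching ggml/llama.cpp code paths are actually compiled in.
# Option/define names are placeholders for illustration only.
if (LLAMA_CUBLAS)
    # Enable the CUDA (cuBLAS) code path
    target_compile_definitions(ggml  PRIVATE GGML_USE_CUBLAS)
    target_compile_definitions(llama PRIVATE GGML_USE_CUBLAS)
endif()

if (LLAMA_CLBLAST)
    # Enable the OpenCL (CLBlast) code path
    target_compile_definitions(ggml  PRIVATE GGML_USE_CLBLAST)
    target_compile_definitions(llama PRIVATE GGML_USE_CLBLAST)
endif()
```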
No! Back on track :-)
@cosmic-snow lots of stuff has happened, especially some significant CMake fixes. I'd suggest trying again now, if you want :+1:
Done