Rahul Vivek Nair
> hardcoding Turbo into the project codebase is still under license investigation, since some online commercial services also use Fooocus for common users, whereas Turbo is kind of an NC model >...
Yeah, I have the same problem
Can't find CMakeLists.txt; it seems to be using a Python binding to llama.cpp instead. Can't find where to compile with the flags ON. @gabriel-peracio
I'm using Google Chrome.
Same, it gives gibberish output only when layers are offloaded to the GPU via -ngl. Without offload it works as it should. I had to roll back to the pre-CUDA...
> Thank you for reporting this issue. Just to make sure: are you getting garbage outputs with all model sizes, even with 7b? I've tried it with all the types...
> Alright. I currently don't have CUDA installed on my Windows partition but I'll go ahead and install it to see if I can reproduce the issue. Is it working...
Can confirm: ran under WSL and the output is as expected. Something is wrong only on the Windows side, producing the gibberish output.
It is saved in the checkpoints folder.