Pi

108 comments of Pi

That's not related to the app; it's just the model. I think FreedomGPT will implement a settings page soon, so once that is released, you can change the temperature settings...
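
For anyone curious what a temperature setting actually changes, here is a minimal sketch of temperature-scaled sampling. It is illustrative only; the function and names are my own, not FreedomGPT's actual code:

```ts
// Sample the next token index from raw logits, scaled by temperature.
// Lower temperature sharpens the distribution (more deterministic);
// higher temperature flattens it (more random).
function sampleWithTemperature(logits: number[], temperature: number): number {
  const scaled = logits.map((l) => l / temperature);
  const maxLogit = Math.max(...scaled);
  const exps = scaled.map((l) => Math.exp(l - maxLogit)); // numerically stable softmax
  const total = exps.reduce((a, b) => a + b, 0);
  let r = Math.random() * total;
  for (let i = 0; i < exps.length; i++) {
    r -= exps[i];
    if (r <= 0) return i;
  }
  return exps.length - 1;
}
```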

> That is:
> CPU: Intel(R) Pentium(R) Gold G5400 CPU @ 3.70GHz
> GPU: NVIDIA GeForce GTX 1650 4GB
> RAM: 8.00 GB
>
> System type: 64-bit,...

> Sorry, it's unusably slow. I have an Intel i7-12700F, 32 GB RAM, GeForce GTX 1080Ti. I'm not going to leave it on overnight to wait for an answer. Disappointed....

Why? If you want to use Colab, then just use the oobabooga text-generation-webui or the llama.cpp CLI.

It doesn't have context, which means it doesn't know what you're referring to when you say "wrong". So it just made up a random response that's not related at all...
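
As a minimal sketch of what "having context" would mean (the names here are assumptions, not the app's actual code), the simplest approach is to keep the transcript and resend it with every prompt, so a follow-up like "wrong" can be resolved against earlier turns:

```ts
// Keep the whole conversation and prepend it to each new prompt.
const history: string[] = [];

function buildPrompt(userMessage: string): string {
  history.push(`User: ${userMessage}`);
  // The model sees every earlier turn, not just the latest message.
  return history.join("\n") + "\nAssistant:";
}

function recordReply(reply: string): void {
  history.push(`Assistant: ${reply}`);
}
```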

It's alpaca.cpp: https://github.com/ohmplatform/FreedomGPT/commit/e28a8a8651d33cd901d97f3ec718b22940beba70

This uses alpaca.cpp, which has been abandoned. Context memorization is only supported by the newer llama.cpp, which now supports Alpaca models as well.

No, you can't force the model to produce more output. Unless the output has been cut off, this is normal and expected behavior. I'm closing this issue now.
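
To make "unless the output has been cut off" concrete, here is an illustrative check for telling a natural stop apart from a token-limit cutoff. The result shape below is a hypothetical assumption for the sketch, not a real API:

```ts
// Hypothetical result shape: "eos" = the model chose to stop (normal,
// even for short answers); "length" = generation hit the token limit,
// meaning the output really was cut short.
interface GenerationResult {
  text: string;
  finishReason: "eos" | "length";
}

function wasTruncated(result: GenerationResult): boolean {
  return result.finishReason === "length";
}
```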

> not sure why you're still checking for sysThreads == 4 but code-wise it looks OK

I'm doing that so that when there are 4 threads, it will use all...

> in this case you should use i then if sysThreads = 4 it would also go into the loop.
> i think your target is setting the threads to...
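
For reference, a minimal sketch of the thread-selection logic being discussed, assuming the intent is to use every thread on 4-thread machines and leave headroom on larger ones; the function name and the halving heuristic are my assumptions, not the PR's actual code:

```ts
import os from "node:os";

// Pick how many threads to hand to the model.
function pickThreadCount(sysThreads: number = os.cpus().length): number {
  if (sysThreads <= 4) {
    // Covers the sysThreads == 4 case too: small machines use everything.
    return sysThreads;
  }
  // Assumption: on larger machines, leave roughly half the threads
  // free for the OS and the Electron UI.
  return Math.max(4, Math.floor(sysThreads / 2));
}

console.log(`threads: ${pickThreadCount()}`);
```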