Adam Mikulis

15 comments by Adam Mikulis

I was thinking of transitioning to the BatchedExecutor before adding conversation save/loading, but wanted your thoughts. As I want this to be usable for NPCs in...

> There are things available in LLamaSharp to do all of these things except for scheduling, but for now if you want to use the BatchedExecutor it'll be up to...

If llama3 is being overly verbose, add "" to the AntiPrompts. I've had good results with it so far and it seems to have more personality than Mistral Instruct v0.2.
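The anti-prompt string in the comment above appears to have been lost in rendering. As an illustration only, a minimal sketch of setting AntiPrompts in LLamaSharp, assuming the intended stop string was Llama 3's end-of-turn marker `<|eot_id|>` (an assumption, not recovered from the original comment):

```csharp
using System.Collections.Generic;
using LLama.Common;

// Sketch: stop generation when the model emits its end-of-turn marker.
// "<|eot_id|>" is an assumed stop string for Llama 3 chat templates;
// the right value depends on the model's template.
var inferenceParams = new InferenceParams
{
    AntiPrompts = new List<string> { "<|eot_id|>" }
};
```

The executor checks generated text against each entry in AntiPrompts and halts inference when one appears, which curbs the runaway verbosity described above.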

Will [PR #6920 from llama.cpp](https://github.com/ggerganov/llama.cpp/pull/6920) resolve this issue?

Hi @vvdb-architecture, if it is not using your GPU even with the CUDA backend, is your GpuLayerCount in ModelParams set to -1, or to a value between 1 and 33? If it...
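For context, a minimal sketch of the setting being asked about, assuming a hypothetical model path (the layer count that fits depends on the model size and available VRAM):

```csharp
using LLama.Common;

// Sketch: configure GPU offloading for a LLamaSharp model.
// "path/to/model.gguf" is a placeholder, not a real path.
var parameters = new ModelParams("path/to/model.gguf")
{
    // Number of transformer layers to offload to the GPU.
    // -1 offloads all layers; a smaller positive value offloads
    // only that many, keeping the rest on the CPU.
    GpuLayerCount = -1
};
```

If GpuLayerCount is left at 0, inference runs entirely on the CPU even when the CUDA backend is installed, which matches the symptom described.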