Martin Evans
For reference, the build script used to generate the binaries is [here](https://github.com/SciSharp/LLamaSharp/blob/master/.github/workflows/compile.yml). Every binary included in any of our backend packages is built with this action; you can...
I'll close this one now since the question was answered and there hasn't been any activity for a while.
I don't think it's _possible_ to support generative models with the current llama.cpp API, unless I'm misunderstanding something. I hope I am, because I agree being able to use generative models...
Probably relevant: https://github.com/ggerganov/llama.cpp/pull/6753
> you need to change the index to -1! Aha, thanks for looking into that! Since a parameter of `-N` means "N from the end", that kind of makes sense...
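For anyone unfamiliar with the "-N from the end" convention, it works like negative indexing in Python sequences. A minimal illustration (Python used purely as an analogy here, not the llama.cpp API itself):

```python
# Negative indices count from the end of a sequence:
# -1 is the last element, -N is the N-th element from the end.
positions = ["tok0", "tok1", "tok2", "tok3"]

last = positions[-1]             # last position ("tok3")
second_from_end = positions[-2]  # one before last ("tok2")

print(last, second_from_end)
```

So passing -1 selects the final position, which is usually the one whose logits you want for sampling the next token.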
I think this embedding stuff was resolved a while ago, so I'll close this issue now.
This is a pretty interesting idea: it solves the explosion of backend DLLs we have while still keeping the advantage of feature auto-selection for end users. My main concern...
> it does not provide debug versions It _could_ do if we wanted to (just add a `WithLLamaCppDebug(true)` method). However, there have been issues before with debug builds not working...
> A game developer you mentioned above would just run CMake with 2-3 configurations But which 2-3 configurations would they choose (there are far more than 2-3 possible configurations)?...
That's basically how it works at the moment - you can have multiple backends installed and LLamaSharp will choose which one to load based on what hardware is available (e.g....
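The selection logic described above can be sketched roughly as follows. This is a hypothetical Python sketch of the general idea (probe the hardware, then load the most capable installed backend whose requirements are met); the backend names, feature flags, and functions here are illustrative assumptions, not LLamaSharp's actual implementation:

```python
# Hypothetical sketch of backend auto-selection.
# All names (detect_features, BACKENDS, etc.) are invented for illustration.

def detect_features():
    # A real loader would query the system (GPU drivers, CPU flags, ...).
    # Here we just pretend we found a CUDA GPU and an AVX2-capable CPU.
    return {"cuda", "avx2"}

# Candidate backends in priority order: (name, features it requires).
BACKENDS = [
    ("cuda",   {"cuda"}),    # preferred: GPU acceleration
    ("vulkan", {"vulkan"}),
    ("avx2",   {"avx2"}),
    ("cpu",    set()),       # always-works fallback
]

def select_backend(installed, features):
    """Return the first installed backend whose requirements are satisfied."""
    for name, required in BACKENDS:
        if name in installed and required <= features:
            return name
    raise RuntimeError("no usable backend installed")

# Only the AVX2 and CPU backends are installed, so despite the CUDA GPU
# being detected, the AVX2 backend is chosen.
print(select_backend({"avx2", "cpu"}, detect_features()))  # -> avx2
```

The key property is that installing more backend packages only ever widens the set of candidates; the loader still picks the best match for the machine it actually runs on.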