Marlon Regenhardt
It seems I broke LLaMA 7B Full on FreedomGPT; I just gave it this prompt: ``` format a double value `val` with two digits after the decimal, left padded to...
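For comparison, here is what a correct answer to that prompt could look like in C#; the target width is cut off above, so the 10 used below is only an assumed value:

```csharp
double val = 3.14159;

// Two digits after the decimal point, then left-padded with spaces.
// The required total width is cut off in the prompt above, so 10 is
// just an assumption for illustration.
string padded = val.ToString("F2").PadLeft(10);

// Same result via a composite format string: ",10" pads to width 10,
// ":F2" keeps two decimal places.
string formatted = string.Format("{0,10:F2}", val);

Console.WriteLine($"[{padded}]");    // [      3.14]
Console.WriteLine($"[{formatted}]"); // [      3.14]
```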
If similar models are on a similar level of performance, this seems perfectly fine for smaller tasks that don't require a longer conversation. I tested on a Ryzen 2700 and...
Maybe we can use the lib from https://github.com/dranger003/llama.cpp-dotnet, provided we actually get it to work; for some reason I can't build it and I'm not sure why. I opened...
Nice, you make it look simple to implement. I added the possible libraries to my earlier overview comment. I would, however, use something other than a regex for evaluation, since...
I experimented a bit [using Code Llama in LlamaSharp](https://github.com/Regenhardt/LLamaSharp/blob/feature/coding-assistant/LLama.Examples/NewVersion/CodingAssistant.cs) and... well, it works, but not great. I'm not sure what to do about other parameters or the system prompt, setting the...
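For reference, a rough sketch of how such an experiment can be wired up in LLamaSharp. The model file name, parameter values, and the `[INST]` prompt wrapping are assumptions, and the exact API surface (`InteractiveExecutor`, `InferenceParams.Temperature`, ...) differs between LLamaSharp releases, so treat this as an approximation rather than the actual CodingAssistant code:

```csharp
using System;
using System.Collections.Generic;
using LLama;
using LLama.Common;

// Model path and all parameter values below are placeholder assumptions.
var parameters = new ModelParams("codellama-7b-instruct.Q4_K_M.gguf")
{
    ContextSize = 4096
};

using var weights = LLamaWeights.LoadFromFile(parameters);
using var context = weights.CreateContext(parameters);
var executor = new InteractiveExecutor(context);

// Code Llama Instruct expects the (system +) user prompt wrapped in [INST] ... [/INST].
var prompt = "[INST] Format a double value `val` with two digits after the decimal. [/INST]";

var inferenceParams = new InferenceParams
{
    Temperature = 0.2f,                          // lower temperature tends to help with code
    MaxTokens = 512,
    AntiPrompts = new List<string> { "[INST]" }  // stop when the model starts a new turn
};

// Stream the generated tokens to the console as they arrive.
await foreach (var token in executor.InferAsync(prompt, inferenceParams))
{
    Console.Write(token);
}
```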
Not sure what actually changed, but after using the GGUF version of CodeLlama, I get much better results: 
I get maybe 2-3 words a second, after waiting ~7 seconds until it starts generating. It runs on my CPU, so there's probably huge potential to use even bigger...
I have the same problem using tensorflow/keras 2.15.0 for a simple model.
```
Model: "sequential"
_________________________________________________________________
 Layer (type)                Output Shape              Param #
=================================================================
 lstm (LSTM)                 (None, 1)                 272
 dense (Dense)...
```
How do I put that into the container? The tensorflow image comes with 3.11.0rc. Secretly I hope there'll be a new container that lets me just port the whole thing...
- [ ] `PublicKeyCredentialRpEntity` needs an actual description. I suggest *mostly* the one from the WebAuthn [spec](https://www.w3.org/TR/webauthn-2/#webauthn-relying-party), even though it doesn't really include the possibility for a FIDO2 implementation in...