kth8
Here are 2 ways I used to install using pip, first with conda:
```
conda create -n py310-piper python=3.10 -y
conda run -n py310-piper pip install piper-tts
conda run...
```
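For reference, here is a minimal sketch of the plain venv route (an illustration only, not necessarily the second way the truncated comment goes on to describe; the environment name is arbitrary):

```
# create and activate a throwaway virtual environment
python3 -m venv piper-env
source piper-env/bin/activate
# install piper-tts from PyPI and check the CLI is on PATH
pip install piper-tts
piper --help
```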
I think dynamically loading models via API is already accomplished by [llama-swap](https://github.com/mostlygeek/llama-swap).
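As a hedged sketch of what that looks like: llama-swap is driven by a YAML config that maps model names to server commands, and it starts and stops those servers on demand. The field names below follow my reading of its README and the model path is illustrative, so verify against upstream:

```yaml
# sketch of a llama-swap config.yaml (verify field names against the upstream README)
models:
  "llama3-8b":
    # llama-swap launches this command when the model is first requested
    # and proxies API traffic to it; ${PORT} is substituted at launch
    cmd: llama-server --port ${PORT} -m /models/llama3-8b.gguf
```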
> > I think dynamically loading models via API is already accomplished by [llama-swap](https://github.com/mostlygeek/llama-swap).
>
> Yeah but making that accessible in the main project in a user friendly way...
I tried your characters using a Docker container and didn't encounter any errors.
```
$ docker run --rm ghcr.io/kth8/bitnet build/bin/llama-cli --model Llama3-8B-1.58-100B-tokens-TQ2_0.gguf --prompt "£"
...
system_info: n_threads = 4 (n_threads_batch...
```
Code is in my repo if you want to take a look: https://github.com/kth8/bitnet
The Hugging Face repo is not mine; I just linked to it as a reference for where I got the GGUF model file.
I found this app recently on /r/locallama, which got me interested in checking it out. I wanted to install it using Homebrew but instead found this issue, so I wrote a...
Well, the script works with `brew install ./vibe.rb`; submitting it upstream is another matter given the quarantine situation.
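On the quarantine point: macOS Gatekeeper attaches a quarantine attribute to unsigned downloads, which can be cleared manually. A hedged one-liner, assuming the installed binary ends up linked under the Homebrew prefix (the path and binary name here are illustrative):

```
# strip the Gatekeeper quarantine attribute from the installed binary
# (adjust the path to wherever brew actually links it)
xattr -dr com.apple.quarantine "$(brew --prefix)/bin/vibe"
```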
I uploaded all the source code of this program to my AI and it responded with:
```
LogiOps doesn't have a simple, direct configuration option like vertical_scroll_speed or scroll_multiplier within...
```
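From my reading of the LogiOps wiki, the nearest thing to a scroll-speed knob is `axis_multiplier` under `hiresscroll` in `logid.cfg`. A sketch under that assumption (device name and multiplier values are illustrative; verify against the project docs):

```
devices: ({
    name: "Wireless Mouse MX Master 3";
    hiresscroll: {
        hires: true;
        // route high-res wheel events through logid so the multiplier applies
        target: true;
        up:   { mode: "Axis"; axis: "REL_WHEEL_HI_RES"; axis_multiplier: 2;  };
        down: { mode: "Axis"; axis: "REL_WHEEL_HI_RES"; axis_multiplier: -2; };
    };
});
```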
https://huggingface.co/brunopio/Llama3-8B-1.58-100B-tokens-GGUF