
llama.go is like llama.cpp in pure Golang!

16 llama.go issues

Looking forward to the latest models and running 4-bit quantisation on Windows.

@mfreeman451 Would it be possible to take this forward with a forked repo? You seem to be the only other contributor who knows how to take this forward. I can code Golang and...

I quantized the llama 7b-chat model with llama.cpp and got the model ggml-model-q4_0.gguf. But llama.go does not seem to support the GGUF version; it shows the error: `[ERROR] Invalid model file '../llama.cpp/models/7B/ggml-model-q4_0.gguf'!`...

https://item.jd.com/10076686823591.html#crumb-wrap Hi, I have a Lenovo Ren 9000 desktop computer here. For the specific configuration, please refer to the shop purchase link. Running the lscpu command shows that the...

Is the project ready for production use? What is the minimum hardware required to run the 7B version? (Recommended CPU? How many CPU threads?) Can the project run in 8GB of RAM?...

I fixed a typo in V3: Spring '24 instead of Spring '23. Please allow me to say a few words of encouragement about your repositories: I want to express my gratitude...

```
./llama-go-v1.4.0-linux --model=guanaco-3b-uncensored-v2.ggmlv1.q4_0.bin --prompt="write a story about alibaba and snow white"
```

(output begins with the llama.go ASCII-art logo) ...

I am able to run llama-go-v1.exe. It gives the same output as in the README file: "Loading model, Please wait...", and later "REST Server ready on...

Would be great to see how this code compares to the C++ version. Nice project, btw.