alpaca.cpp
                        Locally run an Instruction-Tuned Chat-Style LLM
Reported issue: the `chat` binary fails to start when the model file cannot be found in the working directory:

```
main: seed = 1716909502
llama_model_load: loading model from 'ggml-alpaca-7b-q4.bin' - please wait ...
llama_model_load: failed to open 'ggml-alpaca-7b-q4.bin'
main: failed to load model from 'ggml-alpaca-7b-q4.bin'
```
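This failure usually just means the quantized weights are not where `chat` looks for them. A minimal sketch of a working invocation, assuming the `ggml-alpaca-7b-q4.bin` weights have already been downloaded; the explicit path shown is only illustrative:

```
# run from the directory that contains the weights file
./chat -m ggml-alpaca-7b-q4.bin

# or pass an explicit path to -m (hypothetical location)
./chat -m /path/to/ggml-alpaca-7b-q4.bin
```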
Hi, I am getting strange results with the 13B model. Is this the expected result?

```
./chat -m ggml-alpaca-13b-q4.bin
main: seed = 1679253871
llama_model_load: loading model from 'ggml-alpaca-13b-q4.bin' - please wait ...
llama_model_load: ...
```
Anytime I input a prompt longer than a couple of lines, alpaca crashes with no error message. I'm running this on a beefy computer (32GB RAM, 12th Gen Intel(R) Core(TM) ...