Moses Hu

Results 22 comments of Moses Hu

> Did you solve it? Try `torch.load("", map_location="cpu")` and then save the result again; that converts a GPU checkpoint to CPU.
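The suggestion above can be sketched as a small helper. This is an assumption-laden illustration, not the commenter's exact code: the `src_path`/`dst_path` arguments and the `convert_to_cpu` name are hypothetical, and the empty path in the original comment is a placeholder you would fill in yourself.

```python
import torch

def convert_to_cpu(src_path: str, dst_path: str) -> None:
    """Re-save a checkpoint so it loads on CPU-only machines.

    Hypothetical helper: paths are placeholders, not from the original comment.
    """
    # map_location="cpu" remaps any CUDA tensors onto the CPU at load time,
    # so the checkpoint can be opened on a machine without a GPU.
    state = torch.load(src_path, map_location="cpu")
    # Saving the remapped state produces a checkpoint with CPU tensors only.
    torch.save(state, dst_path)
```

After re-saving, the new file loads without `map_location` on a CPU-only machine.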

> ```shell
> make libllama.so
> ```
>
> It gives me the error `LLAMA_ASSERT: llama.cpp:1800: !!kv_self.ctx`. How can I solve it? The command is `python -m llama_cpp.server --model model/ggml-model-f16-daogou.bin --port 7777 --host 127.0.0.1...`