taozhiyuai
```
FROM /Users/taozhiyu/Downloads/M-GGUF/Meta-Llama-3-120B-Instruct-GGUF/Q5KM/Meta-Llama-3-120B-Instruct.Q5_K_M.gguf
TEMPLATE """{{ if .System }}system {{ .System }}{{ end }}{{ if .Prompt }}user {{ .Prompt }}{{ end }}assistant {{ .Response }}"""
PARAMETER num_keep 24
PARAMETER stop ""
PARAMETER...
```
Does anyone know how to solve this issue?
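For reference, the standard Llama 3 instruct template normally includes the `<|start_header_id|>`, `<|end_header_id|>` and `<|eot_id|>` tokens, which look like they were stripped when the Modelfile above was pasted. Below is a minimal sketch of such a Modelfile, reusing the same GGUF path; the stop tokens shown are illustrative assumptions, not values confirmed in this thread.

```
# Sketch: standard Llama 3 instruct template in an Ollama Modelfile
# (stop tokens below are assumptions for illustration)
FROM /Users/taozhiyu/Downloads/M-GGUF/Meta-Llama-3-120B-Instruct-GGUF/Q5KM/Meta-Llama-3-120B-Instruct.Q5_K_M.gguf
TEMPLATE """{{ if .System }}<|start_header_id|>system<|end_header_id|>

{{ .System }}<|eot_id|>{{ end }}{{ if .Prompt }}<|start_header_id|>user<|end_header_id|>

{{ .Prompt }}<|eot_id|>{{ end }}<|start_header_id|>assistant<|end_header_id|>

{{ .Response }}<|eot_id|>"""
PARAMETER num_keep 24
PARAMETER stop "<|start_header_id|>"
PARAMETER stop "<|end_header_id|>"
PARAMETER stop "<|eot_id|>"
```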
> Llama 3 ChatQA is a very weird model which is optimised for RAG. It's still not 100% supported in ollama. The 70B works; only the 8B does not, even when I try the 8B...
Same for other non-English characters. :(
> > This is my `Modelfile`
> >
> > ```
> > FROM ./q4_K_M.gguf
> > TEMPLATE """{{ if .System }}System: {{ .System }}
> >
> > {{ end }}{{...
> > ```
> Found the available .GGUF files in the LM Studio community: [Llama3-ChatQA-1.5-8B-GGUF](https://huggingface.co/lmstudio-community/Llama3-ChatQA-1.5-8B-GGUF)
>
> I tried to write a `Modelfile` to install it in Ollama, and it doesn't run very...
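For comparison, here is a minimal Modelfile sketch for the ChatQA-style prompt format (plain `System:`, `User:` and `Assistant:` prefixes), assuming a local file named `q4_K_M.gguf` as in the quoted Modelfile; the template layout and the stop string are assumptions, not a configuration confirmed to work in this thread.

```
# Sketch: ChatQA-style prompt format in an Ollama Modelfile
# (filename, layout and stop string are assumptions)
FROM ./q4_K_M.gguf
TEMPLATE """{{ if .System }}System: {{ .System }}

{{ end }}User: {{ .Prompt }}

Assistant: {{ .Response }}"""
PARAMETER stop "User:"
```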
> I think this is a network issue. Try `export HF_ENDPOINT=https://hf-mirror.com` and see if it helps. The model should be saved in your HF cache, and then you can load...
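A minimal shell sketch of that suggestion, assuming a recent `huggingface_hub` with the `huggingface-cli` tool is installed; the model ID `BAAI/bge-small-en-v1.5` is the one used elsewhere in this thread.

```
# Point Hugging Face client libraries at the mirror for this shell session
export HF_ENDPOINT=https://hf-mirror.com

# Pre-download the embedding model into the local HF cache
huggingface-cli download BAAI/bge-small-en-v1.5
```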
> [#573 (comment)](https://github.com/OpenDevin/OpenDevin/issues/573#issuecomment-2031923106)

@SmartManoj thanks for the help, but it does not work for me.

```
(opendevin) taozhiyu@603e5f4a42f1 opendevin % wget https://hf-mirror.com/BAAI/bge-small-en-v1.5/raw/main/1_Pooling/config.json -P /tmp/llama_index/models--BAAI--bge-small-en-v1.5/snapshots/5c38ec7c405ec4b44b94cc5a9bb96e735b38267a/1_Pooling/
--2024-04-16 22:09:40--  https://hf-mirror.com/BAAI/bge-small-en-v1.5/raw/main/1_Pooling/config.json
Resolving hf-mirror.com (hf-mirror.com)... 160.16.199.204, 133.242.169.68, 153.121.57.40
Connecting to...
```
And I do not find this file at this path: `/tmp/llama_index/models--BAAI--bge-small-en-v1.5/snapshots/5c38ec7c405ec4b44b94cc5a9bb96e735b38267a/1_Pooling/config.json.1`.
I entered the address, pressed Enter, and it just quit!
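If part of the problem is that wget appended a `.1` suffix (its default when a file with the target name already exists), one workaround is to create the directory first and write to an explicit output path with `-O`. A sketch, assuming the snapshot hash from the log above is the one llama_index actually looks for:

```
# Create the snapshot directory llama_index expects
mkdir -p /tmp/llama_index/models--BAAI--bge-small-en-v1.5/snapshots/5c38ec7c405ec4b44b94cc5a9bb96e735b38267a/1_Pooling

# Fetch config.json to that exact filename (avoids wget's config.json.1 behaviour)
wget https://hf-mirror.com/BAAI/bge-small-en-v1.5/raw/main/1_Pooling/config.json \
  -O /tmp/llama_index/models--BAAI--bge-small-en-v1.5/snapshots/5c38ec7c405ec4b44b94cc5a9bb96e735b38267a/1_Pooling/config.json
```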