"llama.cpp error: 'error loading model architecture: unknown model architecture: 'deepseek2'
Prerequisites
Please answer the following questions for yourself before submitting an issue.
- [x] I am running the latest code. Development is very rapid so there are no tagged versions as of now.
- [x] I carefully followed the README.md.
- [ ] I searched using keywords relevant to my issue to make sure that I am creating a new issue that is not already open (or closed).
- [x] I reviewed the Discussions, and have a new bug or useful enhancement to share.
{ "title": "Failed to load model", "cause": "llama.cpp error: 'error loading model architecture: unknown model architecture: 'deepseek2''", "errorData": { "n_ctx": 8192, "n_batch": 512, "n_gpu_layers": 31 }, "data": { "memory": { "ram_capacity": "31.91 GB", "ram_unused": "26.74 GB" }, "gpu": { "gpu_names": [ "NVIDIA GeForce GTX 1080 Ti" ], "vram_recommended_capacity": "11.00 GB", "vram_unused": "9.98 GB" }, "os": { "platform": "win32", "version": "10.0.19045", "supports_avx2": true }, "app": { "version": "0.2.24", "downloadsDir": "C:\Users\ZeroCool22\.cache\lm-studio\models" }, "model": {} } }```
+1
+1
+1
I fixed it by updating Ollama (start the Ollama executable, right-click its icon in the notification area and click 'Restart to update'; or just reinstall Ollama).
I don't use Ollama at all; I don't need it since I use LM Studio.
I have the same error using LM Studio 0.2.27, so it seems this is an LM Studio problem.
Same issue here. Sad, I finally wanted to test it; hope we get an update for this soon!
Same
```
llama_model_load: error loading model: error loading model architecture: unknown model architecture: 'deepseek2'
llama_load_model_from_file: failed to load model
Traceback (most recent call last):
  File "/opt/conda/lib/python3.10/runpy.py", line 196, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/opt/conda/lib/python3.10/runpy.py", line 86, in _run_code
    exec(code, run_globals)
  File "/opt/conda/lib/python3.10/site-packages/llama_cpp/server/__main__.py", line 97, in <module>
```
For anyone still experiencing this issue with LM Studio, I've been able to use the DeepSeek Coder V2 model by downgrading from 0.2.27 to 0.2.25.
Updating Ollama was the solution for me:

```sh
curl -fsSL https://ollama.com/install.sh | sh
```
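Presumably this helps because the install script pulls the latest Ollama release, which ships a newer llama.cpp. After updating, you can sanity-check with `ollama --version` and then try the model again (e.g. `ollama run deepseek-coder-v2`, assuming that's the tag you were using).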
+1
> I have the same error using LM Studio 0.2.27, so it seems this is an LM Studio problem.
Does it work in older versions? 🤔