Hello, how do I download and use the deepseek model?
The documentation doesn't say how to download the model.
I have this question too
Hi all, there are actually several models available for download, including all of the open-source deepseek models, in the web user interface, once you have exo running and can reach localhost:52415. Alternatively, download the models directly from Hugging Face and move them into exo's default folder for downloaded models.
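A minimal sketch of the second route, assuming exo keeps its downloads under ~/.cache/exo/downloads (the path and the example model below are assumptions; check your own install):

# download the repo into the local Hugging Face cache
$ huggingface-cli download deepseek-ai/DeepSeek-R1-Distill-Llama-8B
# move it into exo's download folder so exo can pick it up
$ mv ~/.cache/huggingface/hub/models--deepseek-ai--DeepSeek-R1-Distill-Llama-8B ~/.cache/exo/downloads/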
There is only LLAMA, no deepseek, and it cannot be downloaded.
If you have downloaded deepseek, how do you run it?
I have downloaded the model; how do I load and use it?
The page does not show DeepSeek-R1-Distill-Llama-8B, and unsloth/Llama-3.2-1B-Instruct has been stuck in the downloading state the whole time.
hi, that sounds abnormal; in my case, the deepseek models were already available for download once exo was installed smoothly.
Also, once you have downloaded one or more models in the exo user interface, just select one by clicking it; that is the model you will be using. Moreover, depending on your location, you can set a Hugging Face mirror per the official exo README instructions, e.g. HF_ENDPOINT=https://hf-mirror.com exo.
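For example, as a one-off invocation or exported for the whole shell session (hf-mirror.com is the mirror named in the exo README; substitute whichever endpoint is reachable from your region):

$ HF_ENDPOINT=https://hf-mirror.com exo
# or
$ export HF_ENDPOINT=https://hf-mirror.com
$ exo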
I do not know much about the Linux case; it seems Hugging Face provides macOS users with MLX builds of the deepseek models when downloading through the user interface. You could also look for the models directly on Hugging Face, via the official deepseek channel, e.g. https://huggingface.co/deepseek-ai/DeepSeek-R1.
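For the macOS/MLX route, a hedged example (the mlx-community repo name below is an assumption; search Hugging Face for the exact model and quantization you want):

$ huggingface-cli download mlx-community/DeepSeek-R1-Distill-Qwen-1.5B-4bit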
Hello, I downloaded deepseek through https://huggingface.co/ and put it in the /root/.cache/exo/downloads directory, but I don't know what to do next. How do I make the deepseek model show up on the front-end page? Or did I put the downloaded model in the wrong place?
Why not just use ollama?
hi, did you try running e.g. exo run xxxxxx to use your downloaded deepseek model?
I am also running into this issue. Since it only showed llama models, I used the following commands to try to get it to work:
huggingface-cli download deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B
mv .cache/huggingface/hub/models--deepseek-ai--DeepSeek-R1-Distill-Qwen-1.5B/ .cache/exo/downloads/
I've installed this on two different (Linux) computers; neither can see deepseek, whether running on its own or together.
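For anyone retracing these steps, a quick sanity check, assuming the same default paths as above (whether exo rescans this folder on restart is an assumption):

$ ls ~/.cache/exo/downloads/
$ exo   # restart and check whether the model now appears in the UI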
Hello everyone, has anyone solved this problem? Perhaps only the mac version has the deepseek option?
See https://github.com/exo-explore/exo/issues/697. I also use Arch Linux, with the same result: I don't see any models other than llama.
tinygrad only supports llama models.
you can wait for llama.cpp support.
@xuanzhec I tried with:
exo run deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B
Error: Unsupported model 'deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B' for inference engine
@borch84 hi, exo currently uses its own "pretty names" for all supported models; please look them up in exo/models.py. Also, I don't know whether you are trying to run the model with MLX or tinygrad, on macOS or Linux.
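For instance, one way to look up the pretty names is to grep the shipped model list and then run exo with one of them (the grep target assumes you have the exo source checked out):

$ grep -i deepseek exo/models.py   # list the deepseek pretty names
$ exo run deepseek-coder-v2-lite   # one of the names listed there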
Hi @xuanzhec, I am using Linux. I took one of the models listed in models.py and tried deepseek-coder-v2-lite because my machine is small, but the model is not supported:
$ exo run deepseek-coder-v2-lite
Error: Unsupported model 'deepseek-coder-v2-lite' for inference engine TinygradDynamicShardInferenceEngine