
Hello! How do I download and use the DeepSeek model?

Open a506488043 opened this issue 10 months ago • 16 comments

The documentation doesn't say how to download the model.

a506488043 avatar Feb 06 '25 12:02 a506488043

I have this question too

xwhy1111 avatar Feb 06 '25 15:02 xwhy1111

Hi all, if you manage to install exo successfully and can open the web UI at localhost:52415, you will find several models available for download there, including all the open-source DeepSeek models. Alternatively, find and download the models directly from Hugging Face and move them into exo's default folder for downloaded models.
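The manual route can be sketched roughly as below. The repo id and the cache paths are assumptions based on Hugging Face's and exo's usual defaults (`~/.cache/huggingface`, `~/.cache/exo/downloads`) and may differ on your system:

```shell
# Download a DeepSeek model snapshot from Hugging Face
# (requires the huggingface_hub CLI to be installed).
huggingface-cli download deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B

# Move the downloaded snapshot into exo's default download folder;
# both paths below are assumed defaults, adjust to your setup.
mkdir -p ~/.cache/exo/downloads
mv ~/.cache/huggingface/hub/models--deepseek-ai--DeepSeek-R1-Distill-Qwen-1.5B \
   ~/.cache/exo/downloads/
```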

xuanzhec avatar Feb 08 '25 07:02 xuanzhec

> Hi all, if you manage to install exo successfully and can open the web UI at localhost:52415, you will find several models available for download there, including all the open-source DeepSeek models. Alternatively, find and download the models directly from Hugging Face and move them into exo's default folder for downloaded models.

There is only LLaMA, no DeepSeek, and it cannot be downloaded. If you have downloaded DeepSeek, how do you run it? [image]

a506488043 avatar Feb 08 '25 11:02 a506488043

I have downloaded the model; how do I load and use it?

[image] The page does not show DeepSeek-R1-Distill-Llama-8B, and unsloth/Llama-3.2-1B-Instruct has been stuck in the downloading state.

a506488043 avatar Feb 08 '25 14:02 a506488043

> Hi all, if you manage to install exo successfully and can open the web UI at localhost:52415, you will find several models available for download there, including all the open-source DeepSeek models. Alternatively, find and download the models directly from Hugging Face and move them into exo's default folder for downloaded models.

> There is only LLaMA, no DeepSeek, and it cannot be downloaded. If you have downloaded DeepSeek, how do you run it? [image]

[image] Hi, that's unusual; in my case, once exo installed smoothly, DeepSeek was already waiting to be downloaded. Also, once you have downloaded one or more models in the exo user interface, just choose and click one; that selects it as the model in use. Additionally, depending on your location, you can set a Hugging Face mirror as the official exo README describes, e.g. run `HF_ENDPOINT=https://hf-mirror.com exo`.

xuanzhec avatar Feb 10 '25 03:02 xuanzhec

I don't know much about the Linux case; Hugging Face seems to provide macOS users with MLX builds of the DeepSeek models when downloading in the user interface. You might try to find the models directly on Hugging Face, in DeepSeek's official channel, e.g. https://huggingface.co/deepseek-ai/DeepSeek-R1.

xuanzhec avatar Feb 10 '25 03:02 xuanzhec

> I don't know much about the Linux case; Hugging Face seems to provide macOS users with MLX builds of the DeepSeek models when downloading in the user interface. You might try to find the models directly on Hugging Face, in DeepSeek's official channel, e.g. https://huggingface.co/deepseek-ai/DeepSeek-R1.

Hello, I downloaded DeepSeek from https://huggingface.co/ and put it in the /root/.cache/exo/downloads directory. I don't know what to do next; how do I make the DeepSeek model show up on the front-end page? Or did I put the downloaded model in the wrong place?

a506488043 avatar Feb 10 '25 03:02 a506488043

> I don't know much about the Linux case; Hugging Face seems to provide macOS users with MLX builds of the DeepSeek models when downloading in the user interface. You might try to find the models directly on Hugging Face, in DeepSeek's official channel, e.g. https://huggingface.co/deepseek-ai/DeepSeek-R1.

Why not just use ollama?

a506488043 avatar Feb 10 '25 04:02 a506488043

> I don't know much about the Linux case; Hugging Face seems to provide macOS users with MLX builds of the DeepSeek models when downloading in the user interface. You might try to find the models directly on Hugging Face, in DeepSeek's official channel, e.g. https://huggingface.co/deepseek-ai/DeepSeek-R1.

> Why not just use ollama?

Hi, did you try running e.g. `exo run xxxxxx` to use your downloaded DeepSeek model?

xuanzhec avatar Feb 10 '25 05:02 xuanzhec

> There is only LLaMA, no DeepSeek, and it cannot be downloaded. If you have downloaded DeepSeek, how do you run it? [image]

I am also running into this issue. Since it only showed llama models, I used the following commands to try to get it to work:

```shell
huggingface-cli download deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B
mv .cache/huggingface/hub/models--deepseek-ai--DeepSeek-R1-Distill-Qwen-1.5B/ .cache/exo/downloads/
```

I've installed this on two different (Linux) computers; neither can see DeepSeek, whether running on its own or clustered together.

marzvrover avatar Feb 10 '25 09:02 marzvrover

Hello everyone, has anyone solved this problem? Perhaps only the macOS version has the DeepSeek option?

chiyuanbo avatar Feb 13 '25 04:02 chiyuanbo

See https://github.com/exo-explore/exo/issues/697. I also use Arch Linux, with the same result: I don't see any models other than llama.

nuaadupuliu avatar Feb 13 '25 04:02 nuaadupuliu

[image] tinygrad only supports llama models; you can wait for llama.cpp support.

belog2867 avatar Feb 13 '25 12:02 belog2867

> I don't know much about the Linux case; Hugging Face seems to provide macOS users with MLX builds of the DeepSeek models when downloading in the user interface. You might try to find the models directly on Hugging Face, in DeepSeek's official channel, e.g. https://huggingface.co/deepseek-ai/DeepSeek-R1.

> Why not just use ollama?

> Hi, did you try running e.g. `exo run xxxxxx` to use your downloaded DeepSeek model?

@xuanzhec I tried with:

```shell
exo run deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B
```

Error: Unsupported model 'deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B' for inference engine

borch84 avatar Feb 26 '25 03:02 borch84

@borch84 Hi, exo currently uses its own short ("pretty") names for all supported models; please find and read the list in exo/models.py. Also, I don't know whether you are trying to run the model with MLX or tinygrad, on macOS or Linux.
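A sketch of the difference: exo expects the short name from exo/models.py, not the full Hugging Face repo id. The short name `llama-3.2-1b` below is an illustrative assumption; check your own copy of exo/models.py for the exact spelling:

```shell
# Fails: the Hugging Face repo id is not a key in exo/models.py.
# exo run deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B

# Works (assuming the short name exists in your exo/models.py):
exo run llama-3.2-1b
```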

xuanzhec avatar Feb 26 '25 08:02 xuanzhec

Hi @xuanzhec, I am using Linux. I took one of the models listed in models.py and tried deepseek-coder-v2-lite because my machine is small, but the model is not supported:

```shell
$ exo run deepseek-coder-v2-lite
Error: Unsupported model 'deepseek-coder-v2-lite' for inference engine TinygradDynamicShardInferenceEngine
```

borch84 avatar Feb 26 '25 14:02 borch84