CSGHub
How to deploy deepseek-r1:1.5b or any LLM model using CSGHub on a Linux server for demo purposes?
Hi Team,
I would like to deploy deepseek-r1:1.5b or any other LLM model on a Linux server using CSGHub for demo purposes.
Could you please provide a step-by-step tutorial or a video guide? A tutorial in Chinese is also acceptable.
Looking forward to your guidance!
CSGHub provides several different ways to build an LLM app. You can use an inference endpoint to host an LLM server, or build a Gradio/Streamlit app to host a demo. You can check our official inference docs at https://opencsg.com/docs/inferencefinetune/inference_finetune_intro. Currently HF TGI, vLLM, and SGLang are supported; llama.cpp and Ollama support is planned. If you are familiar with Gradio or Streamlit, you can try the Space function as well.
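For the vLLM path, a minimal sketch of standing up an OpenAI-compatible endpoint looks like the following (assuming vLLM is installed and the weights are reachable; the model ID, port, and context length here are illustrative, not CSGHub-specific):

```shell
# Launch an OpenAI-compatible server for the 1.5B distill model (illustrative model ID)
vllm serve deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B \
  --port 8000 \
  --max-model-len 8192

# Once the server is up, query it like any OpenAI-style endpoint
curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B",
        "messages": [{"role": "user", "content": "Hello!"}]
      }'
```

An inference endpoint on CSGHub wraps the same kind of server behind the platform, so the curl call above is also a reasonable smoke test for a deployed endpoint.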
For the DeepSeek R1 model, as far as I know, running R1 or R1-Zero requires at least 2×8 H100 GPUs. Personally, I would suggest trying DeepSeek-R1-Distill-Qwen or DeepSeek-R1-Distill-Llama, which can be loaded with TGI or vLLM at a reasonable GPU requirement. For an even lower GPU budget (such as a single GPU), you can use vLLM to serve an AWQ-quantized version. All of these features are supported now. If you run into any problems, feel free to comment 😄
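The single-GPU AWQ path can be sketched as below. The model ID is a placeholder (substitute whichever AWQ build of the distill model you use), and the flags assume a recent vLLM release:

```shell
# Serve an AWQ-quantized checkpoint on one GPU (model ID is a placeholder)
vllm serve <your-awq-distill-model> \
  --quantization awq \
  --gpu-memory-utilization 0.90 \
  --max-model-len 8192
```

Lowering `--gpu-memory-utilization` or `--max-model-len` is the usual first lever if the server fails to allocate the KV cache on a small GPU.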
Based on the provided documentation, I cannot answer your question. These documents only contain release notes for StarShip version updates and do not cover the project's community activity, code quality assessment, or whether there is any artificial popularity.
If you have specific technical or feature questions about the StarShip project, you can visit https://opencsg.com/docs for more information, and the technical support team will step in to assist.
Based on the retrieved content, I cannot answer your question about whether this project is unmaintained or has engaged in fake starring.
You can visit https://opencsg.com/docs for more information, and technical support will step in to handle it.