
Add Offline DeepSeek Model

Open Barqawiz opened this issue 10 months ago • 21 comments

Implement an offline DeepSeek model loader for inference that:

  • Loads DeepSeek models directly from the official host (HuggingFace).
  • Supports both full and quantized versions (if available).
  • Implements memory optimization techniques similar to llama.cpp or ollama (a rough sketch follows this list).
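A minimal sketch of the kind of memory-friendly loading intended here, assuming sharded `.safetensors` checkpoints and using only the recommended lower-level packages (torch, safetensors); the function names are illustrative, not part of any existing API:

```python
# Sketch only: lazy, shard-by-shard loading so the full checkpoint is never
# held in memory twice. Assumes the weights are sharded *.safetensors files.
import glob
import torch
from safetensors import safe_open

def iter_tensors(weights_dir, device="cpu"):
    """Yield (name, tensor) pairs one at a time from sharded checkpoints."""
    for shard in sorted(glob.glob(f"{weights_dir}/*.safetensors")):
        with safe_open(shard, framework="pt", device=device) as f:
            for name in f.keys():
                yield name, f.get_tensor(name)  # only this tensor is materialized

def load_state_dict_fp16(weights_dir):
    """Collect tensors in half precision to roughly halve resident memory."""
    return {name: t.to(torch.float16) for name, t in iter_tensors(weights_dir)}
```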

Expected Deliverables:

  • Develop a wrapper for the DeepSeek model (a rough skeleton is sketched after the notes below).
  • Create a folder under model/deepseek for helpers and extended functions used by the wrapper.
  • Write a test case to load the model, to be placed in the test directory.
  • Mention which pip packages the code uses (e.g., torch). Don't use higher-level modules like transformers.

Notes:

  • For reference, check the current (non-optimized) Python code from DeepSeek repo: https://github.com/deepseek-ai/DeepSeek-V3/tree/main/inference
  • llama.cpp is a reference for optimized model loading techniques.
  • Intelli should provide an easy way to load the model.
  • Don't use high-level pip modules like transformers (IntelliNode provides lightweight integration for AI agents).
  • You can use torch, tensorflow, keras, safetensors, triton, etc. (these modules are recommended).
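
A rough shape for the deliverables above, using only torch and safetensors; the module path `model/deepseek/deepseek_wrapper.py`, the class name, and the `build_fn` architecture builder are hypothetical placeholders for code that would mirror the reference implementation in DeepSeek-V3/inference:

```python
# Hypothetical skeleton for model/deepseek/deepseek_wrapper.py (all names illustrative).
import json
import os
import torch
from safetensors.torch import load_file

class DeepSeekWrapper:
    def __init__(self, model_dir, build_fn, device="cpu", dtype=torch.float16):
        """build_fn(config: dict) -> torch.nn.Module is the architecture builder to be
        implemented under model/deepseek, mirroring the DeepSeek-V3/inference reference."""
        self.device = device
        self.dtype = dtype
        # Every DeepSeek checkpoint ships a config.json describing the architecture.
        with open(os.path.join(model_dir, "config.json")) as f:
            self.config = json.load(f)
        self.model = build_fn(self.config).to(device)
        self._load_weights(model_dir)

    def _load_weights(self, model_dir):
        # Load shard by shard and cast to fp16 so peak memory stays close to model size.
        shards = sorted(p for p in os.listdir(model_dir) if p.endswith(".safetensors"))
        for shard in shards:
            state = load_file(os.path.join(model_dir, shard), device=self.device)
            self.model.load_state_dict(
                {k: v.to(self.dtype) for k, v in state.items()}, strict=False
            )
        self.model.eval()

    @torch.inference_mode()
    def generate(self, input_ids, max_new_tokens=64):
        # Plain greedy decoding; sampling and KV caching can be layered on later.
        for _ in range(max_new_tokens):
            logits = self.model(input_ids)[:, -1, :]
            next_id = logits.argmax(dim=-1, keepdim=True)
            input_ids = torch.cat([input_ids, next_id], dim=-1)
        return input_ids
```

A test under `test/` could then point the wrapper at a small distilled checkpoint and assert that `generate` returns a sequence longer than its input.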

Barqawiz avatar Jan 25 '25 23:01 Barqawiz

/bounty $800

intelligentnode avatar Feb 04 '25 13:02 intelligentnode

💎 $800 bounty • IntelliNode

Steps to solve:

  1. Start working: Comment /attempt #82 with your implementation plan
  2. Submit work: Create a pull request including /claim #82 in the PR body to claim the bounty
  3. Receive payment: 100% of the bounty is received 2-5 days post-reward. Make sure you are eligible for payouts

❗ Important guidelines:

  • To claim a bounty, you need to provide a short demo video of your changes in your pull request
  • If anything is unclear, ask for clarification before starting as this will help avoid potential rework
  • Low quality AI PRs will not receive review and will be closed
  • Do not ask to be assigned unless you've contributed before

Thank you for contributing to intelligentnode/Intelli!

| Attempt | Started (UTC) | Solution | Actions |
| --- | --- | --- | --- |
| 🟢 @Kunal-Darekar | Mar 03, 2025, 07:10:29 PM | #95 | Reward |
| 🟢 @onyedikachi-david | Feb 04, 2025, 05:57:41 PM | WIP | |
| 🟢 @Rushi774545 | Mar 11, 2025, 10:27:43 AM | WIP | |
| 🟢 @varshith257 | May 12, 2025, 02:33:49 PM | #119 | Reward |
| 🟢 @Enity300 | Feb 19, 2025, 11:36:41 AM | WIP | |
| 🟢 @RaghavArora14 | Feb 21, 2025, 09:41:03 PM | #94 | Reward |

algora-pbc[bot] avatar Feb 04 '25 13:02 algora-pbc[bot]

/attempt #82


onyedikachi-david avatar Feb 04 '25 17:02 onyedikachi-david

@Barqawiz Have you considered leveraging Ollama for loading and running the DeepSeek models instead?

By building a lightweight wrapper that integrates with Ollama, we could create an API for AI agents to interact with local models, supporting not only DeepSeek but any other model compatible with Ollama. This would simplify development, reduce overhead, and ensure we're tapping into the optimizations Ollama offers for efficient model inference and memory management. I think a custom model loader would be quite difficult to maintain later on.

This approach would make it easier to scale and support a broader range of models in the future as well

oliverqx avatar Feb 04 '25 19:02 oliverqx

Good question @oliverqx. Let me explain the rationale behind using an offline model and why I'm avoiding Ollama or similar high-level modules.

Intelli can build a graph of collaborating agents using the flow concept: Sequence Flow Documentation.

I've managed to integrate multiple offline models into the flow using the KerasWrapper, which provides a convenient way to load several offline models, such as Llama, Mistral, and others: KerasWrapper class.

However, Keras does not currently support DeepSeek, and adding that functionality will likely take some time from the Keras team. As a result, my current focus is on DeepSeek.

I avoid using Ollama because I want to minimize external dependencies. I looked into Ollama as a high-level library, and integrating it would introduce additional unnecessary modules; the same applies to HF Transformers.

You can take inspiration from how Ollama uses modules like Torch, optimization libraries, or Safetensors from HuggingFace; these lower-level modules are accepted. I'm happy to credit their work if you adopt their approaches, but I prefer not to have Ollama as a dependency for running the flow.

Feel free to use o1, o3, or DeepSeek to write any part of the code.

intelligentnode avatar Feb 04 '25 19:02 intelligentnode

@intelligentnode are there any specific DeepSeek variants you'd prefer?

oliverqx avatar Feb 04 '25 20:02 oliverqx

You can use the official ones from R1 or any quantized variant:

DeepSeek-R1 Models

| Model | #Total Params | #Activated Params | Context Length | Download |
| --- | --- | --- | --- | --- |
| DeepSeek-R1 | 671B | 37B | 128K | 🤗 HuggingFace |

DeepSeek-R1-Distill Models

| Model | Base Model | Download |
| --- | --- | --- |
| DeepSeek-R1-Distill-Qwen-1.5B | Qwen2.5-Math-1.5B | 🤗 HuggingFace |
| DeepSeek-R1-Distill-Qwen-7B | Qwen2.5-Math-7B | 🤗 HuggingFace |
| DeepSeek-R1-Distill-Llama-8B | Llama-3.1-8B | 🤗 HuggingFace |
| DeepSeek-R1-Distill-Qwen-14B | Qwen2.5-14B | 🤗 HuggingFace |
| DeepSeek-R1-Distill-Qwen-32B | Qwen2.5-32B | 🤗 HuggingFace |
| DeepSeek-R1-Distill-Llama-70B | Llama-3.3-70B-Instruct | 🤗 HuggingFace |

In general, it is going to be expensive to run DeepSeek-R1, but you can test the code on the 7B or 8B models to be accepted as an attempt. Also, if you know of a hosted quantized version of R1, you can test on it.
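
For example, the 7B distill weights can be pulled down once and then used fully offline; the snippet below assumes `huggingface_hub` (a small standalone package, separate from transformers) is acceptable, otherwise the same files can be fetched over plain HTTPS:

```python
# One-time download of a distilled R1 checkpoint for offline testing.
# huggingface_hub is a lightweight standalone pip package (not transformers).
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="deepseek-ai/DeepSeek-R1-Distill-Qwen-7B",
    allow_patterns=["*.safetensors", "*.json"],  # weights, config, tokenizer files
)
print(local_dir)  # pass this directory to the offline loader
```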

intelligentnode avatar Feb 04 '25 20:02 intelligentnode

@intelligentnode for tokenization, is it alright to use AutoTokenizer from transformers?

I know:

Don't use high level pip module like transformers

I was wondering if that also applies to tokenization, though.

oliverqx avatar Feb 04 '25 22:02 oliverqx

If it requires installing Transformers to use it, then no. If it is published as an independent module with a lightweight pip installation, then yes. You can implement a lightweight tokenizer if an independent one is not available.
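
One option along those lines is the standalone `tokenizers` package, which installs independently of transformers; a minimal sketch, assuming the checkpoint directory contains the `tokenizer.json` shipped with the R1 repos:

```python
# pip install tokenizers  (standalone Rust-backed package, no transformers dependency)
from tokenizers import Tokenizer

tok = Tokenizer.from_file("path/to/tokenizer.json")
enc = tok.encode("Explain mixture-of-experts in one sentence.")
print(enc.ids)              # token ids to feed into the model
print(tok.decode(enc.ids))  # round-trip back to text
```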

intelligentnode avatar Feb 04 '25 23:02 intelligentnode

/attempt #82


Enity300 avatar Feb 19 '25 11:02 Enity300

@Enity300 would you like to work on this together? It's quite the task, I think.

oliverqx avatar Feb 19 '25 16:02 oliverqx

@oliverqx It is good that you mentioned this. With your collaboration, I can assist you with the relevant chunks of code that require stitching and testing.

Send me your email using the form below, and I will organize a call: https://www.intellinode.ai/contact

Mention you are from Github.

intelligentnode avatar Feb 19 '25 16:02 intelligentnode

@intelligentnode

So far I've studied the DeepSeek model repo, and this week I've been studying llama.cpp; sometime this weekend I think I can open a PR.

Until then the work is very theoretical. I will definitely hit you up once I get a solid grasp on what optimization means.

I'm not an AI dev, so this is a lot of new info; that's why it's taking so long. I'm confident I can come to a solution sometime mid-March.

oliverqx avatar Feb 19 '25 17:02 oliverqx

/attempt 82


RaghavArora14 avatar Feb 21 '25 21:02 RaghavArora14

💡 @RaghavArora14 submitted a pull request that claims the bounty. You can visit your bounty board to reward.

algora-pbc[bot] avatar Feb 21 '25 22:02 algora-pbc[bot]

Hi @intelligentnode, I recently submitted a PR implementing this bounty on behalf of @Enity300; please review that too.

RaghavArora14 avatar Feb 26 '25 17:02 RaghavArora14

💡 @Kunal-Darekar submitted a pull request that claims the bounty. You can visit your bounty board to reward.

algora-pbc[bot] avatar Mar 03 '25 19:03 algora-pbc[bot]

/attempt #95

Hi @intelligentnode,

I've submitted PR #95 implementing the offline DeepSeek model loader as requested in this bounty.

The implementation includes:

  • Direct loading from HuggingFace without high-level dependencies
  • Support for all DeepSeek-R1 models and distilled variants
  • Memory optimization with quantization support
  • Comprehensive test coverage

I'd appreciate your review when you have a chance. Please let me know if you need any clarification or have questions about the implementation.

Thank you!


Kunal-Darekar avatar Mar 03 '25 20:03 Kunal-Darekar

/attempt #82


Rushi774545 avatar Mar 11 '25 10:03 Rushi774545

This task's reward is time-sensitive; since a complete solution has not been delivered by this point, I have to adjust the reward.

intelligentnode avatar May 04 '25 21:05 intelligentnode

/bounty $300

intelligentnode avatar May 05 '25 09:05 intelligentnode

Cancelling, as this issue is time-sensitive and was not delivered on time.

intelligentnode avatar May 31 '25 20:05 intelligentnode