
Inflight nemo model export support

Open · JimmyZhang12 opened this pull request 1 year ago · 0 comments

What does this PR do?

Adds in-flight NeMo-to-TRT-LLM v10 model conversion and engine refitting using device weights

Collection: [Note which collection this PR will affect]

Changelog

  • Add specific, line-by-line info on the high-level changes in this PR.

Usage

  • You can potentially add a usage example below
# Add a code snippet demonstrating how to use this 
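
The snippet above is left as a template placeholder, and the concrete API added by this PR is not shown in this thread. As a minimal sketch only, the names below (TensorRTLLM, build, refit, nemo_model) are hypothetical placeholders for what an in-flight export-and-refit flow could look like:

# Hypothetical sketch only: every name below is an illustrative placeholder,
# not the PR's actual interface.
from nemo.export.tensorrt_llm import TensorRTLLM  # assumed import path

# `model` stands in for a NeMo model already resident on the GPU, e.g. one
# restored inside a training loop; loading is omitted here.
exporter = TensorRTLLM(model_dir="/tmp/trtllm_engine")

# Convert the in-memory NeMo model straight into a TRT-LLM v10 engine,
# with no intermediate checkpoint written to disk ("in-flight" conversion).
exporter.build(nemo_model=model)  # hypothetical signature

# After the weights change (e.g. an RLHF update step), refit the existing
# engine in place from the weights already on the device.
exporter.refit(nemo_model=model)  # hypothetical signature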

GitHub Actions CI

The Jenkins CI system has been replaced by GitHub Actions self-hosted runners.

The GitHub Actions CI will run automatically when the "Run CICD" label is added to the PR. To re-run CI, remove and re-add the label. To run CI on an untrusted fork, a NeMo user with write access must first click "Approve and run".

Before your PR is "Ready for review"

Pre checks:

  • [ ] Make sure you have read and followed the Contributor guidelines
  • [ ] Did you write any new necessary tests?
  • [ ] Did you add or update any necessary documentation?
  • [ ] Does the PR affect components that are optional to install? (e.g., Numba, Pynini, Apex, etc.)
    • [ ] Reviewer: Does the PR have correct import guards for all optional libraries?

PR Type:

  • [ ] New Feature
  • [ ] Bugfix
  • [ ] Documentation

If you haven't finished some of the above items, you can still open a "Draft" PR.

Who can review?

Anyone in the community is free to review the PR once the checks have passed. The Contributor guidelines list the specific people who can review PRs in various areas.

Additional Information

  • Related to # (issue)

JimmyZhang12 · Jun 24 '24 20:06

We likewise need an OpenAI-API-compatible mode; Ollama is too weak.

Logistic98 · Jun 14 '24 03:06

In fact, in the RAGFlow system, User Setting - Model Providers - Xinference supports not only models deployed by Xinference but also any OpenAI-API-compatible model. So you can use any OpenAI-API-compatible model via Model Providers - Xinference.
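
For readers unfamiliar with what "OpenAI-API-compatible" means in practice, here is a minimal sketch using the official openai Python client; the base URL (Xinference's default local endpoint), the API key, and the model name are assumptions, not RAGFlow configuration:

# Minimal sketch: any server that speaks the OpenAI chat-completions protocol
# can be called this way. The URL, key, and model name below are assumptions.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:9997/v1",   # e.g. a local Xinference endpoint
    api_key="not-used-by-local-servers",   # many local servers ignore the key
)

response = client.chat.completions.create(
    model="my-deployed-model",  # hypothetical model name
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)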

aopstudio · Jun 24 '24 08:06

> In fact, in the RAGFlow system, User Setting - Model Providers - Xinference supports not only models deployed by Xinference but also any OpenAI-API-compatible model. So you can use any OpenAI-API-compatible model via Model Providers - Xinference.

This should be added to the README or the documentation somewhere; that's a big deal!

lrq3000 · Dec 30 '24 19:12

@edisonzf2020 Thanks for your suggestion, and apologies for the late reply! ⏳🙏

Our product now supports OpenAI-compatible API access for other LLMs, and this is also documented in our technical guides. 📘🔌 This should help resolve your issue.

Please feel free to close this feature request. If it remains open, we'll include it in our upcoming round of issue cleanups. 🧹

Thanks again for your helpful feedback — we truly appreciate your contributions! 💡🚀

which-W · May 12 '25 03:05