
TextGeneration Pipeline

Open saddam213 opened this issue 1 year ago • 2 comments

Support for TextGeneration ONNX models

Initial support will use OnnxRuntime-GenAI (onnxruntime-genai).

TODO:

  • [x] TextGeneration Project
  • [ ] Basic stateless pipeline
  • [ ] CUDA and CPU support
  • [ ] Code Examples
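
For context on the "basic stateless pipeline" item: stateless here means each generation call starts from scratch, with no KV cache or conversation state retained between calls. Below is a minimal sketch of that idea in Python with a dummy next-token function; all names are hypothetical illustrations, not the OnnxStack or onnxruntime-genai API.

```python
from typing import Callable, List

def generate_stateless(
    next_token: Callable[[List[int]], int],  # model forward pass: token ids -> next token id
    prompt_tokens: List[int],
    eos_id: int,
    max_new_tokens: int = 16,
) -> List[int]:
    """Greedy decode loop; each call is self-contained (no cached session state)."""
    tokens = list(prompt_tokens)  # copy so the caller's list is untouched
    for _ in range(max_new_tokens):
        tok = next_token(tokens)  # re-reads the full sequence every step (stateless)
        tokens.append(tok)
        if tok == eos_id:
            break
    return tokens

# Toy "model": emits last token + 1, then EOS (0) once it reaches 9
demo = lambda toks: 0 if toks[-1] >= 9 else toks[-1] + 1
print(generate_stateless(demo, [5], eos_id=0))  # → [5, 6, 7, 8, 9, 0]
```

A stateful pipeline would instead keep the KV cache alive between calls, trading memory for the cost of re-processing the prompt each time.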

saddam213 avatar Mar 20 '24 01:03 saddam213

Looks cool. How is the text generation speed for onnx models compared to llamasharp for a Mistral 7B v0.2 Instruct model for example?

AshD avatar Apr 05 '24 14:04 AshD

> Looks cool. How is the text generation speed for onnx models compared to llamasharp for a Mistral 7B v0.2 Instruct model for example?

I'm not too sure; I haven't been able to get LLamaSharp working in Web or WPF since version 0.8, so this is my first attempt at using another LLM base library. This one is still very new, but it's a million times less complicated than llama.cpp.

saddam213 avatar Apr 05 '24 18:04 saddam213