add support for Mistral using TGI / vLLM / candle
Hi guys, love your project!
I was wondering if you could add support for Mistral via TGI or vLLM, to use them as endpoints. They also have active support for new LLM architectures such as Mistral.
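For illustration, here is a minimal sketch of what calling a TGI server's `/generate` endpoint from Rust could look like, using `reqwest` and `serde_json`. The host, port, prompt, and generation parameters are placeholder assumptions (not llm-chain code), and the request schema should be checked against the TGI docs:

```rust
// Hypothetical sketch: query a locally running TGI server (e.g. started with
// `--model-id mistralai/Mistral-7B-v0.1`). Assumes reqwest's "blocking" and
// "json" features are enabled.
use serde_json::json;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let client = reqwest::blocking::Client::new();
    let body = json!({
        "inputs": "What is the capital of France?",
        "parameters": { "max_new_tokens": 64, "temperature": 0.7 }
    });
    let resp: serde_json::Value = client
        .post("http://localhost:8080/generate") // placeholder host/port
        .json(&body)
        .send()?
        .json()?;
    // TGI returns the completion under "generated_text".
    println!("{}", resp["generated_text"]);
    Ok(())
}
```

Since vLLM can serve an OpenAI-compatible API, a similar thin HTTP client would cover that route as well.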
Hey, sounds like a very good idea :)
If anyone wants to add this, it would be a most welcome contribution.
Llama+Mistral+Zephyr and GPU acceleration in only ~450 lines using candle. https://github.com/huggingface/candle/blob/main/candle-examples/examples/quantized/main.rs
If Mistral support is added with candle, it could be fairly trivial to also support Llama and Zephyr.
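To make the candle route a bit more concrete, here is a rough, untested sketch along the lines of that quantized example: load GGUF weights, tokenize a prompt, and sample tokens in a loop. The file paths, prompt, and sampling settings are placeholders, and candle's API (notably the `from_gguf` signature) has shifted between versions, so treat this as an outline rather than a working driver:

```rust
// Rough sketch modeled on candle's quantized example; paths are placeholders
// and the exact candle API may differ between versions.
use candle_core::quantized::gguf_file;
use candle_core::{Device, Tensor};
use candle_transformers::generation::LogitsProcessor;
use candle_transformers::models::quantized_llama::ModelWeights;
use tokenizers::Tokenizer;

fn main() -> anyhow::Result<()> {
    let device = Device::Cpu; // or a CUDA device for GPU acceleration

    // Load quantized Mistral weights from a GGUF file (placeholder path).
    let mut file = std::fs::File::open("mistral-7b-q4_0.gguf")?;
    let content = gguf_file::Content::read(&mut file)?;
    let mut model = ModelWeights::from_gguf(content, &mut file, &device)?;

    // Tokenize the prompt with the matching tokenizer (placeholder path).
    let tokenizer = Tokenizer::from_file("tokenizer.json").map_err(anyhow::Error::msg)?;
    let mut tokens: Vec<u32> = tokenizer
        .encode("What is the capital of France?", true)
        .map_err(anyhow::Error::msg)?
        .get_ids()
        .to_vec();

    // Sampling loop: feed the whole prompt once, then one token at a time,
    // tracking the position so the model's KV cache stays consistent.
    let mut sampler = LogitsProcessor::new(42, Some(0.8), None);
    let mut index_pos = 0;
    for step in 0..64 {
        let ctx = if step == 0 { &tokens[..] } else { &tokens[tokens.len() - 1..] };
        let input = Tensor::new(ctx, &device)?.unsqueeze(0)?;
        let logits = model.forward(&input, index_pos)?.squeeze(0)?;
        index_pos += ctx.len();
        let next = sampler.sample(&logits)?;
        tokens.push(next);
    }
    println!("{}", tokenizer.decode(&tokens, true).map_err(anyhow::Error::msg)?);
    Ok(())
}
```

An actual contribution would presumably wrap something like this behind llm-chain's executor traits rather than living in a `main` like this.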
I have some experience with Rust, although my familiarity with LLMs is somewhat limited. I can take on this challenge, as it would mark my first contribution to llm-chain.
Sounds like a great idea :)