KoboldAI-Client
MLC-LLM Integration?
Perhaps it would be a good idea to add support for the MLC-AI team's new project, https://github.com/mlc-ai/mlc-llm, so KoboldAI could run on any graphics card that supports the Vulkan API, just like llama.cpp support was added in the past. For example, I have an RX 570 with 8 GB of VRAM that supports Vulkan but is not supported by current ROCm versions, so for people in a similar situation this would be very important and useful.