MGM
LMDeploy now supports inference for MiniGemini :rocket:
LMDeploy, an AI deployment platform supporting multiple inference backends, is committed to providing fast and stable model deployment services.
It now supports accelerated inference and serving of MiniGemini Llama models; see https://github.com/InternLM/lmdeploy/pull/1438
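As a rough sketch of what running a MiniGemini model through LMDeploy might look like, using LMDeploy's `pipeline` API for vision-language models; the model ID `YanweiLi/MGM-7B` and the image URL below are illustrative placeholders, not taken from the announcement, so check the linked PR for the exact supported variants:

```python
# Hypothetical sketch: MiniGemini (MGM) inference via LMDeploy's pipeline API.
# The model ID and image URL are placeholders chosen for illustration.

def main():
    from lmdeploy import pipeline
    from lmdeploy.vl import load_image

    # Build an inference pipeline for an MGM model (placeholder model ID).
    pipe = pipeline('YanweiLi/MGM-7B')

    # MGM is a vision-language model, so prompts are (text, image) pairs.
    image = load_image('https://example.com/demo.jpg')
    response = pipe(('Describe this image.', image))
    print(response.text)

if __name__ == '__main__':
    main()
```

The same model can also be exposed as an OpenAI-compatible HTTP service with LMDeploy's `lmdeploy serve api_server` command.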
Hi @AllentDan, thanks for your great work! It's really cool. We are happy to see it supported by more repos.