llm-serving topic
ray
Ray is a unified framework for scaling AI and Python applications. Ray consists of a core distributed runtime and a set of AI Libraries for accelerating ML workloads.
mosec
A high-performance ML model serving framework that offers dynamic batching and CPU/GPU pipelines to fully exploit your compute resources.
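Dynamic batching, the technique mosec's description refers to, buffers incoming requests until either a maximum batch size or a wait deadline is reached, then runs the model once on the whole batch. A minimal, framework-independent sketch of the idea (this is illustrative only, not mosec's actual API; all names here are hypothetical):

```python
# Illustrative dynamic-batching sketch (hypothetical class, not mosec's API):
# requests queue up until max_batch_size items arrive or max_wait_ms elapses,
# then the handler is invoked once on the collected batch.
import queue
import threading
import time


class DynamicBatcher:
    def __init__(self, handler, max_batch_size=8, max_wait_ms=10):
        self._handler = handler            # callable: list of inputs -> list of outputs
        self._max_batch_size = max_batch_size
        self._max_wait = max_wait_ms / 1000.0
        self._queue = queue.Queue()

    def submit(self, item):
        """Enqueue one request; returns a completion event and a result slot."""
        done = threading.Event()
        slot = {"result": None}
        self._queue.put((item, done, slot))
        return done, slot

    def run_once(self):
        """Collect up to max_batch_size items within the wait window, then handle them."""
        batch = [self._queue.get()]        # block until at least one item exists
        deadline = time.monotonic() + self._max_wait
        while len(batch) < self._max_batch_size:
            remaining = deadline - time.monotonic()
            if remaining <= 0:
                break
            try:
                batch.append(self._queue.get(timeout=remaining))
            except queue.Empty:
                break
        inputs = [item for item, _, _ in batch]
        outputs = self._handler(inputs)    # one batched call instead of N single calls
        for (_, done, slot), out in zip(batch, outputs):
            slot["result"] = out
            done.set()


# Usage: batch a toy "model" that doubles its inputs.
batcher = DynamicBatcher(lambda xs: [x * 2 for x in xs], max_batch_size=4)
tickets = [batcher.submit(i) for i in range(3)]
batcher.run_once()
results = [slot["result"] for _, slot in tickets]
```

On a GPU, the batched call amortizes kernel-launch and memory-transfer overhead across requests, which is why serving frameworks trade a few milliseconds of queueing latency for it.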
skypilot
SkyPilot: Run LLMs, AI, and Batch jobs on any cloud. Get maximum savings, highest GPU availability, and managed execution—all with a simple interface.
ray-llm
RayLLM - LLMs on Ray
OpenLLM
Run any open-source LLM, such as Llama 2 or Mistral, as an OpenAI-compatible API endpoint in the cloud.
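"OpenAI-compatible" here means the server accepts the same request shape as OpenAI's Chat Completions API, so existing OpenAI client code can point at it by swapping the base URL. A sketch of that request shape (the host, port, and model id below are placeholders, not taken from the source):

```python
# Build an OpenAI Chat Completions-style request body using only the stdlib.
# The model id and target URL are placeholders for whatever the server exposes.
import json

payload = {
    "model": "llama-2-7b-chat",            # placeholder model id
    "messages": [
        {"role": "user", "content": "Hello!"},
    ],
    "max_tokens": 64,
}
body = json.dumps(payload)
# POST this body to <server base URL>/v1/chat/completions with
# Content-Type: application/json; the response mirrors OpenAI's schema.
```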
vllm
A high-throughput and memory-efficient inference and serving engine for LLMs
sugarcane-ai
npm like package ecosystem for Prompts 🤖
superduperdb
🔮 SuperDuperDB: Bring AI to your database! Build, deploy and manage any AI application directly with your existing data infrastructure, without moving your data. Including streaming inference, scalab...
ialacol
🪶 Lightweight OpenAI drop-in replacement for Kubernetes
friendli-client
Friendli: the fastest serving engine for generative AI