llm-ops topic
OpenLLM
Run any open-source LLM, such as Llama 3.1 or Gemma, as an OpenAI-compatible API endpoint in the cloud.
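An OpenAI-compatible endpoint, like the one OpenLLM exposes, accepts standard chat-completions request bodies. A minimal sketch of building such a request; the model name here is a placeholder, not an OpenLLM default:

```python
import json

def build_chat_request(model: str, user_message: str, temperature: float = 0.7) -> dict:
    # Standard OpenAI-style chat-completions body, accepted by any
    # OpenAI-compatible server.
    return {
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
        "temperature": temperature,
    }

body = build_chat_request("llama-3.1-8b", "Hello!")
payload = json.dumps(body)  # ready to POST to the server's /v1/chat/completions route
```

Because the wire format matches OpenAI's, existing OpenAI client libraries can usually be pointed at such a server by overriding the base URL.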
aiconfig
AIConfig is a config-based framework to build generative AI applications.
friendli-client
Friendli: the fastest serving engine for generative AI
ask-astro
An end-to-end LLM reference implementation providing a Q&A interface for Airflow and Astronomer
athina-evals
Python SDK for running evaluations on LLM generated responses
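The core idea behind such eval SDKs is scoring each model response against a criterion. An illustrative evaluator (this is not the athina-evals API) that scores a response by how many expected keywords it contains:

```python
def keyword_eval(response: str, expected_keywords: list[str]) -> float:
    # Fraction of expected keywords present in the response (case-insensitive).
    # Returns 1.0 for an empty keyword list (nothing required, nothing missed).
    if not expected_keywords:
        return 1.0
    text = response.lower()
    hits = sum(1 for kw in expected_keywords if kw.lower() in text)
    return hits / len(expected_keywords)

score = keyword_eval("Airflow schedules DAGs of tasks.", ["airflow", "dag"])
# score == 1.0: both keywords appear in the response
```

Real eval frameworks layer richer checks on top (faithfulness, relevance, LLM-as-judge), but the batch-score-and-aggregate shape is the same.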
cognita
RAG (Retrieval-Augmented Generation) framework by TrueFoundry for building modular, open-source applications for production.
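At its core, any RAG framework retrieves the documents most similar to a query and prepends them to the prompt. A minimal sketch of that retrieval step, using bag-of-words cosine similarity rather than the learned embeddings a framework like cognita would use:

```python
import math
from collections import Counter

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse term-count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    # Rank documents by similarity to the query and keep the top k.
    q = Counter(query.lower().split())
    ranked = sorted(docs, key=lambda d: cosine(q, Counter(d.lower().split())), reverse=True)
    return ranked[:k]

docs = [
    "GPUs accelerate model training",
    "RAG retrieves documents before generation",
    "Airflow schedules batch jobs",
]
context = retrieve("how does RAG use retrieved documents", docs, k=1)
prompt = f"Context: {context[0]}\nQuestion: how does RAG work?"
```

Production frameworks swap the toy similarity for a vector database and embedding model, but the retrieve-then-augment pipeline is unchanged.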
julep
Deploy serverless AI workflows at scale. Firebase for AI agents.
JamAIBase
The collaborative spreadsheet for AI. Chain cells into powerful pipelines, experiment with prompts and models, and evaluate LLM responses in real-time. Work together seamlessly to build and iterate on...
konduktor
Cluster/scheduler health monitoring for GPU jobs on Kubernetes.