Kubernetes reference implementation
LLMWare provides several Docker implementation scripts and a devcontainer reference script.
We would welcome contributions from Kubernetes experts to provide a reference Kubernetes configuration and a 'fast start' script for deploying llmware in a Kubernetes cluster, as well as advice on additional steps and capabilities that would facilitate scalable Kubernetes deployments.
This is a great first issue if you are an expert in Kubernetes and just starting to learn llmware.
@doberst I'm planning to work on the Kubernetes deployment for LLMWare. Given the multi-container setup and the complexity of the project, I have a few questions to ensure the solution meets your expectations. Since this is a "good first issue" and for a "reference" Kubernetes configuration, I want to make sure we align on the requirements.
The project would involve deploying multiple services (MongoDB, Milvus, Neo4j, Pgvector, Qdrant, Redis Stack) and managing inter-service communication, resource allocation, configuration management, external access, scalability (HPA) and monitoring/logging.
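To make that scope concrete, here is a minimal sketch of what a Deployment and Service for one of these backing stores could look like. All names, labels, the image tag, and the resource values are illustrative assumptions for discussion, not a tested configuration:

```yaml
# Illustrative sketch: Deployment + Service for one backing store (Redis Stack).
# Resource requests/limits below are placeholder assumptions, not benchmarks.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-stack
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis-stack
  template:
    metadata:
      labels:
        app: redis-stack
    spec:
      containers:
        - name: redis-stack
          image: redis/redis-stack:latest
          ports:
            - containerPort: 6379
          resources:
            requests:
              cpu: 250m
              memory: 256Mi
            limits:
              cpu: "1"
              memory: 1Gi
---
apiVersion: v1
kind: Service
metadata:
  name: redis-stack
spec:
  selector:
    app: redis-stack
  ports:
    - port: 6379
      targetPort: 6379
```

Each of the other services (MongoDB, Milvus, Neo4j, Pgvector, Qdrant) would follow the same pattern, with stateful services likely graduating to StatefulSets with PersistentVolumeClaims in a fuller implementation.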
Follow-Ups:
- Scope of Initial Setup: Should the initial Kubernetes configuration focus on a basic setup with Deployments and Services?
- Resource Requirements: Are there any specific resource requirements (CPU, memory) that should be considered for the deployments to ensure optimal performance?
Thank you for your time and assistance.
@Lelin07 - I am so sorry that this message slipped through the cracks - we have been completely buried the last couple of weeks. Yes, we really appreciate your interest in this, and would welcome a contribution. In terms of scope, we would look for a basic recipe that could be the foundation for further customization depending upon a specific deployment pattern - and so we would encourage it to be more "universal" as a starting point. In terms of CPU/memory, I don't have a specific guideline - but per the comment above, I would aim for a practical basic implementation that could always be scaled up if needed. Please let me know if you have other questions/clarifications - and promise faster replies! 👍
Hey @doberst I came across the issue regarding the reference Kubernetes configuration and 'fast start' script for deploying LLMWare and am very interested in contributing. This seems like a great first issue to start working on the project while applying my Kubernetes expertise. Before I begin, could you confirm whether this issue is still open and whether I can proceed? My plan is to create a basic, universal configuration and fast-start script, as suggested, to serve as a foundation for scalable deployments.
Please let me know if there are any additional guidelines or details to consider. I look forward to your confirmation and further instructions!
@doberst Currently I'm not working on this issue. Consider assigning @jothilal22
@jothilal22 how far did you get?
@doberst Started working on the Kubernetes deployment for LLMWare. Currently testing the Docker setup. A few clarifications needed:
- Deployment approach: should we use Helm, raw manifests, or Kustomize?
- Cloud considerations: should this be cloud-agnostic or tailored for specific platforms (EKS, GKE, AKS)?
- Fast start script: key features to include? (e.g., auto setup, verification steps)
My $0.02:
- Deployment: Kustomize. It is native to K8s, declarative, and avoids templating.
- Cloud considerations: cloud-agnostic, absolutely agnostic, as many enterprise deployments would be on-premises.
- Fast start script: include verification steps; as you are learning a new system, having good guidance when stuff goes wrong will provide a better UX.
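For example, a Kustomize base for the services discussed above could be as simple as the following (the file names and layout here are assumptions for illustration, not an agreed structure):

```yaml
# kustomization.yaml -- illustrative sketch of a cloud-agnostic base.
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: llmware
resources:
  - mongodb.yaml
  - milvus.yaml
  - neo4j.yaml
  - pgvector.yaml
  - qdrant.yaml
  - redis-stack.yaml
```

The whole base would then be applied with `kubectl apply -k .`, and any platform-specific tweaks (storage classes, load balancer annotations) could live in per-cloud overlays, keeping the base itself agnostic.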
@jothilal22 - appreciate your contribution on this topic - and encourage you to be creative and use your judgment. My overall recommendation is to set out a solid agnostic framework as a starting point which will be easy for others to build upon for specific use cases and cloud platforms - so a "fast start" script is usually a very good approach. Ideally, it will be an easy-to-start reference implementation for Kubernetes, but also relatively straightforward to supplement with more advanced capabilities.
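To sketch what a "fast start" script with verification steps might look like (this is a rough, untested outline; the namespace, manifest layout, and deployment names are all assumptions):

```shell
#!/usr/bin/env bash
# fast_start.sh -- illustrative sketch only; paths and names are assumptions.
set -euo pipefail

NAMESPACE="llmware"

echo "==> Checking prerequisites..."
command -v kubectl >/dev/null || { echo "kubectl not found"; exit 1; }
kubectl cluster-info >/dev/null || { echo "no reachable cluster"; exit 1; }

echo "==> Creating namespace and applying manifests..."
kubectl create namespace "$NAMESPACE" --dry-run=client -o yaml | kubectl apply -f -
kubectl apply -k . -n "$NAMESPACE"

echo "==> Verifying rollouts..."
for deploy in mongodb milvus neo4j pgvector qdrant redis-stack; do
  kubectl rollout status deployment/"$deploy" -n "$NAMESPACE" --timeout=180s \
    || { echo "deployment $deploy failed to become ready"; exit 1; }
done

echo "All services are up in namespace $NAMESPACE."
```

The verification loop is the key UX piece per the comment above: each failed rollout stops the script with a message naming the service, rather than leaving a new user to diagnose a half-deployed cluster.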
As you start to dig in further, if you find it more useful to build for a specific cloud platform, that is OK too - I would recommend picking either AWS or Azure, and offer some comments/documentation on where we were specifically drawing on platform-specific APIs.
I have used Helm in the past, but realize that this is one of those topics that can draw a lot of opinions in the Kubernetes world, so I would leave it to you and the K8s experts. Please try to keep the above points in mind.
Look forward to your contribution - and please share any questions / discussion topics along the way!