
Discussion: Remote Multi-User MCP Server

Open RGanor opened this issue 7 months ago • 4 comments

Hey everyone,

I'd like to propose a potential new feature and gather feedback on its feasibility and community interest: running kubectl-ai as a remote, centralized server.

Motivation & Use Case:

The core idea is to deploy kubectl-ai centrally, perhaps within a cluster or on a dedicated machine. Multiple Kubernetes users could then interact with this single instance via a network connection, rather than each running kubectl-ai locally.

Each user would authenticate their requests to the central kubectl-ai server by passing their specific Kubernetes access credentials (a bearer token) in the Authorization header. The server would then use these credentials to interact with the target Kubernetes cluster on behalf of the requesting user.

Technical Approach & Proof of Concept:

To enable network access, the communication mechanism would need to change from the current stdio model to SSE (Server-Sent Events). I've implemented a basic proof of concept locally that adapts the server to use SSE and extracts authentication information from incoming requests:

func authFromRequest(ctx context.Context, r *http.Request) context.Context {
	// Stash the caller's Authorization header so downstream tools can use it.
	return context.WithValue(ctx, authKey{}, r.Header.Get("Authorization"))
}

func (s *kubectlMCPServer) Serve(ctx context.Context) error {
	transport := "SSE" // TODO: pass as a command-line argument
	if transport == "SSE" {
		sseServer := server.NewSSEServer(s.server,
			server.WithBaseURL("http://localhost:8080"),
			server.WithSSEContextFunc(authFromRequest))
		log.Printf("SSE server listening on :8080")
		return sseServer.Start(":8080")
	}
	// Backward compatibility: fall back to the stdio transport.
	return server.ServeStdio(s.server)
}

(Note: Example code, needs adapting for the target project.)

Discussion Points / Open Questions:

  1. Handling Multi-User Authorization & Kubernetes Interaction: This is the core challenge. The current approach relies on bashtool executing kubectl, which reads the local user's ~/.kube/config. For a remote server handling requests from multiple users:

    • How can the server securely use the credentials (token from the Authorization header) provided by the remote user to execute kubectl commands against the target cluster as that user?
    • Option A: Refactor to use the Kubernetes SDK: Directly using the Go client library (client-go) would allow passing impersonation or token information programmatically when creating the Kubernetes client. This offers fine-grained control but likely requires significant refactoring of the parts that currently shell out to kubectl. (A minimal sketch of both options follows this list.)
    • Option B: Dynamic Environment/Configuration for kubectl: Could the server dynamically set environment variables just for the duration of the kubectl command spawned by bashtool? What are the security and cleanup implications? Is this reliable?
    • Option C: Other methods? Are there alternative approaches to achieve this impersonation or per-request credential handling when shelling out?
  2. Community Interest & Use Cases:

  • Do others in the community see potential value in this remote server functionality?
  • If you think this could be useful, how do you imagine you or others might use it? What kind of scenarios come to mind?
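
To make Options A and B more concrete, here is a minimal sketch of both. Assumptions: the authKey context key from the PoC above, and an apiServer URL supplied by configuration; this is illustrative only, not tested against the actual codebase. Option B is shown without environment mutation, using kubectl's own --server/--token flags for per-command credentials:

import (
	"context"
	"fmt"
	"os/exec"
	"strings"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

// Option A: build a per-request client-go client from the caller's bearer
// token (extracted from the Authorization header by authFromRequest above).
func clientForRequest(ctx context.Context, apiServer string) (*kubernetes.Clientset, error) {
	header, _ := ctx.Value(authKey{}).(string)
	token := strings.TrimPrefix(header, "Bearer ")
	if token == "" {
		return nil, fmt.Errorf("no bearer token in request context")
	}
	cfg := &rest.Config{
		Host:        apiServer,
		BearerToken: token,
		// A real deployment must also set TLSClientConfig with the target
		// cluster's CA bundle; impersonation would go in cfg.Impersonate.
	}
	return kubernetes.NewForConfig(cfg)
}

// Option B: keep shelling out, but pass the token via kubectl's own
// --server/--token flags rather than mutating the server's environment.
// No temp kubeconfig to clean up, though the token is briefly visible in
// the process table for the command's lifetime.
func runKubectlAs(ctx context.Context, apiServer, token string, args ...string) ([]byte, error) {
	flags := []string{"--server=" + apiServer, "--token=" + token}
	cmd := exec.CommandContext(ctx, "kubectl", append(flags, args...)...)
	return cmd.CombinedOutput()
}

Either way, the remote user's credential is scoped to a single request and never touches the server's own kubeconfig; the trade-offs are TLS/CA handling for Option A and token visibility in the process table for Option B.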

Looking forward to hearing your thoughts, feedback, concerns, and alternative ideas on this proposal!

RGanor avatar May 12 '25 17:05 RGanor

Thanks @RGanor. Ack. Will share my thoughts soon.

droot avatar May 12 '25 22:05 droot

I had a similar idea too, where kubectl-ai is installed on a single jumpbox and that jumpbox accesses n clusters and troubleshoots on them. This is a fairly common setup for security reasons.

Although the implementation could be somewhat complex, given the number of cloud providers out there, imho it's a feature well worth considering.

zvdy avatar May 13 '25 20:05 zvdy

Yes, I love the idea, and in my opinion that is the future. I think of it as remote kubectl-ai, or kubectl-ai in the cloud (where the cloud could be a remote machine, k8s, or anything). kubectl-ai effectively gets split into a client and a server, where the client could be the CLI or a web UI (or even a desktop app).

Keeping the challenges aside for a moment, the exciting part is the use cases it enables around collaboration (imagine user-A sharing a troubleshooting session with another user). In an enterprise setting, the platform team can pre-package prompts and tools so that clients (UI/CLI) don't have to worry about that configuration; users get access to shared (pre-made) prompts and pre-packaged tools out of the box. And most importantly, it enables asynchronous and proactive workflows.

TL;DR: I am very excited about the overall idea. Let's keep the discussion going and start thinking about what the crawl, walk, and run steps would be to make this a reality.

droot avatar May 16 '25 19:05 droot

where kubectl-ai is installed on a single jumpbox and that jumpbox accesses n clusters and troubleshoots on them. This is a fairly common setup for security reasons.

I'm actually not a fan of jumpbox or jumphost solutions, especially due to security concerns. What if we run kubectl-ai as a controller inside the cluster and use in-cluster authentication? This way, we offload credential handling to the cluster itself and improve security.

Meanwhile, the local kubectl-ai can continue to use ~/.kube/config as usual, but it would now be able to access multiple clusters simultaneously, thanks to in-cluster kubectl-ai controllers.

sequenceDiagram
    participant User
    participant LocalKubectlAI as Local kubectl-ai
    participant InClusterController as In-Cluster kubectl-ai Controller
    participant K8sAPI as Kubernetes API Server

    User->>LocalKubectlAI: Run kubectl-ai command (e.g., get pods)
    LocalKubectlAI->>InClusterController: Send request (API/command) to in-cluster controller (via network)
    InClusterController->>K8sAPI: Authenticate using in-cluster credentials (ServiceAccount)
    InClusterController->>K8sAPI: Execute requested operation (e.g., get pods)
    K8sAPI-->>InClusterController: Return operation result (e.g., pod list)
    InClusterController-->>LocalKubectlAI: Send result back
    LocalKubectlAI-->>User: Display result

    Note over LocalKubectlAI,InClusterController: Local kubectl-ai can access multiple clusters by sending requests to different in-cluster controllers.
    Note over InClusterController,K8sAPI: In-cluster authentication improves security by avoiding local kubeconfig exposure.
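
For reference, the in-cluster authentication step in the diagram is just standard client-go usage; a rough sketch (not project code):

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func listPods(ctx context.Context, namespace string) error {
	// Authenticate with the ServiceAccount token mounted into the pod;
	// no kubeconfig ever leaves the cluster.
	cfg, err := rest.InClusterConfig()
	if err != nil {
		return err
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		return err
	}
	pods, err := client.CoreV1().Pods(namespace).List(ctx, metav1.ListOptions{})
	if err != nil {
		return err
	}
	for _, p := range pods.Items {
		fmt.Println(p.Name)
	}
	return nil
}

One open question with this approach: a shared ServiceAccount means RBAC is enforced per controller rather than per user, which ties back to the impersonation discussion above.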

tuannvm avatar May 17 '25 06:05 tuannvm