POC request: Add mcp-client to enhance tool coverage
I think we can add an mcp-client to extend coverage for the tools that kubectl-ai can use. It will be a cool project to explore what this integration might look like. We have a rough idea, but a POC is a good way to determine the shape.
Any MCP enthusiasts want to take this up?
/cc @tuannvm @selimacerbas
/cc @justinsb
Hey @droot, thanks for the ping, I am willing to help on this. What kind of POC idea do you have in mind?
Main goals of the POC are:
- Investigate Go packages that we can use for mcp-client integration
- Determine how users will configure different MCP servers
- Discover tools exposed by the configured MCP servers and plug them into the agentic loop
- Discover key challenges/problems to be solved
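To make the configuration question concrete, here is one possible shape for the server configuration. Every field name below is hypothetical, sketched only to anchor the discussion, not an agreed-on schema:

```yaml
# Hypothetical kubectl-ai config sketch (field names are not final).
mcpServers:
  - name: kafka-mcp-server
    command: kafka-mcp-server        # spawned as a stdio subprocess
    env:
      KAFKA_BOOTSTRAP_SERVERS: broker-0.kafka:9092
  - name: gdrive
    url: http://localhost:8080/sse   # a remote transport (SSE / streamable HTTP)
```

The split between `command` (local stdio servers) and `url` (remote servers) mirrors how most MCP hosts distinguish the two transports today.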
Potential scenarios:
- Fetch all deployments with unhealthy pods and send email with detailed report of each failed pod
- Fetch logs (and YAML snapshots) of a particular deployment and save them to Google Drive
- Fetch all apps updated in the last 24 hrs and send a health-summary email
- Given an app, create an SVG with diagrams for each associated k8s resource
Thanks @droot for the ping. My common use case is to discover connection information and bridge it with MCP servers for further exploration. For example, identifying which Kafka brokers the pods are communicating with and then exploring further using kafka-mcp-server.
My experience so far has been implementing the MCP server and client using mcp-go. What are your thoughts?
Not many people know, but kubectl-ai also has MCP server support for running kubectl commands. We used mcp-go and found it to be excellent. I think the Go team is releasing an official MCP SDK based on the work of mcp-go. @selimacerbas I would suggest using mcp-go for the POC to save some time on the exploration.
> My common use case is to discover connection information and bridge it with MCP servers for further exploration. For example, identifying which Kafka brokers the pods are communicating with and then exploring further using kafka-mcp-server.
Can you say more about it? A concrete example of the exploration scenario would help.
I will start on the issue and POC steps in the upcoming days. I think going with mcp-go is also on point. Thanks for the clarifications!
Thanks, @selimacerbas. I believe the official Go implementation for MCP is in the works, but it might take some time to become widely adopted. For now, relying on the existing mcp-go is a perfectly acceptable option (github-mcp-server also uses it).
> Can you say more about it? A concrete example of the exploration scenario would help.
@droot Hope this explains the idea better:
```mermaid
sequenceDiagram
    participant User
    participant kubectl-ai
    participant KubeAPI
    participant kafka-mcp-server
    participant KafkaCluster
    User->>kubectl-ai: ai inspect pod <pod-name>
    kubectl-ai->>KubeAPI: Retrieve pod metadata
    kubectl-ai->>KubeAPI: Get pod env, ConfigMaps, and Secrets that may contain the Kafka endpoint
    KubeAPI-->>kubectl-ai: Return Kafka broker endpoint
    kubectl-ai->>kafka-mcp-server: Query cluster info (brokers, topics, throughput)
    kafka-mcp-server->>KafkaCluster: Use MCP tools to interact
    KafkaCluster-->>kafka-mcp-server: Return cluster metadata
    kafka-mcp-server-->>kubectl-ai: Provide brokers, topics, throughput stats
    kubectl-ai-->>User: Display pod → Kafka endpoint mapping and cluster details
```
By the way, the official library is available: https://cs.opensource.google/go/x/tools/+/master:internal/mcp/README.md
@selimacerbas Any updates? I'd love to follow up and discuss my use case mentioned above. With that, we could seamlessly stay within the kubectl-ai UX throughout the entire troubleshooting session.
@tuannvm I've had an intense couple of days and couldn't really find time to examine it. I will get back to you soon.
No pressure @selimacerbas take your time.
@tuannvm It never hurts to have more than one POC, so if you have an approach you would like to try out, go for it.
2 cents: we can all learn from exploration in this space and we should not worry too much about duplication of effort at this stage.
Being implemented in https://github.com/GoogleCloudPlatform/kubectl-ai/pull/270
TODO: Add streamable HTTP/SSE support in subsequent PR