Tommy Nguyen
Here you are

```
action: llm-chat
payload:
- - hello
timestamp: "2025-05-12T15:17:17.214132-07:00"
---
action: llm-response
payload: {}
timestamp: "2025-05-12T15:17:17.829812-07:00"
---
action: llm-response
payload: {}
timestamp: "2025-05-12T15:17:17.870981-07:00"
---
action: llm-response
payload: ...
```
@droot We can automate this using GitHub Actions. I have a reference implementation in [my repository](https://github.com/tuannvm/haproxy-mcp-server/blob/main/.github/workflows/update-homebrew.yml). The build step triggers the [Homebrew formula update](https://github.com/tuannvm/homebrew-mcp/blob/main/.github/workflows/update-formula.yml).
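For readers who want the shape of that setup, here is a minimal sketch of the release-side workflow: on each published release it dispatches an event to a Homebrew tap repo, which then regenerates the formula. The repository name, secret name, and event type below are illustrative assumptions, not the actual workflows linked above.

```
# Sketch only: on each release, ask the tap repo to update the formula.
# `example-org/homebrew-tap` and `HOMEBREW_TAP_TOKEN` are assumed names.
name: update-homebrew
on:
  release:
    types: [published]
jobs:
  notify-tap:
    runs-on: ubuntu-latest
    steps:
      - name: Dispatch update-formula in the tap repo
        uses: peter-evans/repository-dispatch@v3
        with:
          token: ${{ secrets.HOMEBREW_TAP_TOKEN }}
          repository: example-org/homebrew-tap
          event-type: update-formula
          client-payload: '{"tag": "${{ github.event.release.tag_name }}"}'
```

The tap repo would then carry a matching workflow triggered by `repository_dispatch` with type `update-formula` that bumps the formula's version and checksum.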
@droot We have two options:

- If you can help fork the official repo [`https://github.com/Homebrew/homebrew-cask`](https://github.com/Homebrew/homebrew-cask) to `GoogleCloudPlatform`, then the installation would be:

  ```
  brew install --cask kubectl-ai
  ```

- If...
Plan B :) https://github.com/GoogleCloudPlatform/kubectl-ai/pull/150 @droot please take a look. Thanks!
With these two distribution channels available, Homebrew becomes less important. Closing the issue for now. https://github.com/GoogleCloudPlatform/kubectl-ai/issues/181 https://github.com/GoogleCloudPlatform/kubectl-ai/pull/150
Stopgap solution for now: https://github.com/GoogleCloudPlatform/kubectl-ai/pull/209
@droot @janetkuo @justinsb Any technical complications you can think of? As of today, all providers except Gemini lack native streaming capabilities.
# K8s-bench Evaluation Results

## Model Performance Summary

| Model | Success | Fail |
|-------|---------|------|
| gpt-4.1 | 2 | 1 |
| **Total** | 2 | 1 |

...
# K8s-bench Evaluation Results

## Model Performance Summary

| Model | Success | Fail |
|-------|---------|------|
| gpt-4.1 | 3 | 0 |
| o4-mini | 0 | 1 |

...
> where kubectl-ai was installed singularly on a jumpbox and that jumpbox would access n clusters and troubleshoot on them. As this is a quite common scenario for security...
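That jumpbox pattern boils down to a single kubeconfig holding one context per target cluster, with the active context switched before each troubleshooting session. A minimal sketch, assuming hypothetical cluster names and server addresses:

```
# Illustrative jumpbox kubeconfig: one context per managed cluster.
# Cluster names, server URLs, and the user entry are assumptions.
apiVersion: v1
kind: Config
clusters:
- name: prod-cluster
  cluster:
    server: https://prod.example.com:6443
- name: staging-cluster
  cluster:
    server: https://staging.example.com:6443
users:
- name: jumpbox-user
  user: {}
contexts:
- name: prod
  context:
    cluster: prod-cluster
    user: jumpbox-user
- name: staging
  context:
    cluster: staging-cluster
    user: jumpbox-user
current-context: prod
```

Switching the target is then a standard `kubectl config use-context staging` before running kubectl-ai against that cluster.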