
code architecture outline

Open mikebz opened this issue 9 months ago • 4 comments

I tried to figure out where the extensibility points are for contribution. Some of them are obvious: you can add implementations of an interface that is already defined, such as gollm/interfaces.go, but overall the structure is a bit confusing.

Some subfolders contain full-blown executables, and some are packages of the main kubectl-ai executable. That will probably preclude contributions from people who are not dedicated to the effort or who have not been with it from the start.

My suggestion is to create an overall structure where we can accept contributions or assign parts of the project to various owners.
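One common way to make such extensibility points explicit is a small registry keyed by provider name, so a new backend can be contributed without touching the core loop. A minimal Go sketch; the interface and names here are illustrative stand-ins, not the actual gollm/interfaces.go API:

```go
package main

import "fmt"

// Completer is a hypothetical stand-in for the kind of interface
// defined in gollm/interfaces.go; the real method set differs.
type Completer interface {
	Complete(prompt string) (string, error)
}

// registry maps provider names to constructors, so contributors can
// add a backend by registering it rather than editing the core.
var registry = map[string]func() Completer{}

func register(name string, ctor func() Completer) { registry[name] = ctor }

// echoCompleter is a trivial provider used to demonstrate registration.
type echoCompleter struct{}

func (echoCompleter) Complete(prompt string) (string, error) {
	return "echo: " + prompt, nil
}

func main() {
	register("echo", func() Completer { return echoCompleter{} })
	c := registry["echo"]()
	out, _ := c.Complete("hello")
	fmt.Println(out) // echo: hello
}
```

With this shape, ownership of each provider package can be assigned independently of the core.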

mikebz avatar Apr 01 '25 02:04 mikebz

https://www.mermaidchart.com/raw/9c305bde-4554-413e-b691-a027fcf80ef3?theme=light&version=v0.1&format=svg

polya20 avatar Apr 26 '25 12:04 polya20

Shell-like (zsh, bash, etc.) command history/editing would be great once you are at the LLM prompt, i.e. >>>. Does this already work? If not, can you suggest where to start in the code base to add it? Also, is there any way to set the default model to -model gemini-2.5-pro-exp-03-25, or make it the default instead of erroring out like this on the first try:

reading streaming LLM response: Error 429, Message: Gemini 2.5 Pro Preview doesn't have a free quota tier. Please use Gemini 2.5 Pro Experimental (models/gemini-2.5-pro-exp-03-25) instead.

thanks.

lakamsani avatar Apr 28 '25 03:04 lakamsani

@lakamsani the interactive shell doesn't support history or command editing today. We might be able to use an off-the-shelf package that provides readline-like functionality to address some of that. About the code: start with main.go and look at the terminal UI package; that should be a good starting point.
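The core of history support is just a buffer of submitted prompts plus recall. A minimal self-contained sketch, assuming a "!!" recall convention; a real implementation would use an off-the-shelf readline-style package rather than hand-rolling this:

```go
package main

import "fmt"

// history is a minimal sketch of a readline-style history buffer:
// each submitted prompt is appended, and "!!" recalls the last one.
type history struct{ entries []string }

func (h *history) add(line string) { h.entries = append(h.entries, line) }

func (h *history) last() (string, bool) {
	if len(h.entries) == 0 {
		return "", false
	}
	return h.entries[len(h.entries)-1], true
}

// expand substitutes "!!" with the most recent history entry,
// leaving any other input unchanged.
func (h *history) expand(line string) string {
	if line == "!!" {
		if prev, ok := h.last(); ok {
			return prev
		}
	}
	return line
}

func main() {
	h := &history{}
	h.add("get pods in kube-system")
	fmt.Println(h.expand("!!")) // get pods in kube-system
}
```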

Thanks for pointing out that Gemini 2.5 Pro Preview issue; I thought it was still on the free tier, I will check. In the meantime, were you able to use it with --model gemini-2.5-pro-exp-03-25?

droot avatar Apr 28 '25 12:04 droot

@droot yes, the model error is gone with -model gemini-2.5-pro-exp-03-25, but it is somewhat slow and I'm getting occasional errors like this. I asked it to show CPU/memory usage of a few namespaces as a percentage of total cluster capacity (comparable to what https://github.com/davidB/kubectl-view-allocations does directly, for example).

Error: reading streaming LLM response: iterateResponseStream: invalid stream chunk: {
  "error": {
    "code": 500,
    "message": "An internal error has occurred. Please retry or report in https://developers.generativeai.google/guide/troubleshooting",
    "status": "INTERNAL"
  }
}
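Transient INTERNAL/500 errors like this one can often be papered over with bounded retries and exponential backoff around the streaming read. A minimal Go sketch, assuming the 500 is retryable; the helper names are illustrative, not kubectl-ai's actual code:

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// errInternal simulates the INTERNAL / code 500 stream error above.
var errInternal = errors.New("INTERNAL: code 500")

// transient reports whether an error looks retryable; here we assume
// only the simulated 500 is.
func transient(err error) bool {
	return errors.Is(err, errInternal)
}

// withRetry wraps an operation with bounded exponential backoff,
// one way to absorb intermittent server-side 500s.
func withRetry(attempts int, fn func() error) error {
	delay := 100 * time.Millisecond
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil || !transient(err) {
			return err
		}
		time.Sleep(delay)
		delay *= 2
	}
	return err
}

func main() {
	calls := 0
	err := withRetry(3, func() error {
		calls++
		if calls < 3 {
			return errInternal // simulate two transient failures
		}
		return nil
	})
	fmt.Println("calls:", calls, "err:", err) // calls: 3 err: <nil>
}
```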

lakamsani avatar Apr 28 '25 22:04 lakamsani

Fixed by https://github.com/GoogleCloudPlatform/kubectl-ai/pull/183 and https://github.com/GoogleCloudPlatform/kubectl-ai/pull/198, see https://github.com/GoogleCloudPlatform/kubectl-ai/blob/main/contributing.md#understand-the-repo

janetkuo avatar May 09 '25 22:05 janetkuo