Sunil Arora

274 comments by Sunil Arora

It shouldn't need `GEMINI_API_KEY` if we are asking it to use the `ollama` llm-provider. That looks like a bug somewhere, or the llm-provider argument is not being passed correctly. Looking at ```shell LLM...

> I went through the code of k8s-bench, as the documentation is not fully fleshed out. The flag is passed, but probably a specific model should run with ollama. By default the...

> @droot i tried locally as close as possible to CI env. > > ``` > ./k8s-bench run --agent-bin=../kubectl-ai --llm-provider="ollama" --models="gemma3:1b" --task-pattern="create-pod" --output-dir=./results > Evaluating task: create-pod > > The...

We can try the QAT (quantization-aware trained) models from [ollama](https://ollama.com/library/gemma3):
1. gemma3:4b-it-qat
2. gemma3:12b-it-qat

> Update: my Mac is powerful enough to wrap up in 10 min, but GitHub runners took >40 min... > > Relying on an external server such as Gemini would be the...

> As a suggestion, perhaps this work could be an independent workflow that can be triggered on specific events. Instead of push main, it can be on PR into release...

@rxinui I would like to try out the approach you suggested and see how it goes. Can you please enable this action to run periodically (maybe every 4 hours)...
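A periodic trigger like the one requested above can be sketched as a GitHub Actions `schedule` block. This is a minimal illustration, not the repo's actual workflow; the workflow name and job contents are placeholders:

```yaml
# Hypothetical sketch: run the eval workflow on a fixed schedule.
name: periodic-evals
on:
  schedule:
    - cron: "0 */4 * * *"   # at minute 0 of every 4th hour (UTC)
  workflow_dispatch: {}      # also allow manual, on-demand runs
```

Note that GitHub evaluates `schedule` cron expressions in UTC and only on the default branch, which fits the "independent workflow triggered on specific events" suggestion above.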

> @droot I've rebased and applied the correction!

Something seems to have gone wrong with the rebase.

@rxinui Quick update: I took some pieces from your PR and enabled periodic evals using vertexai https://github.com/GoogleCloudPlatform/kubectl-ai/pull/234 Here are the results of the latest runs: https://github.com/GoogleCloudPlatform/kubectl-ai/actions/runs/15052158983 Thank you so much for...

Please go ahead. Thank you! On Thu, May 8, 2025, 3:16 PM Lefteris ***@***.***> wrote: > *LefterisXefteris* left a comment (GoogleCloudPlatform/kubectl-ai#181) > > > Hello, I would love to work...