kubectl-ai
Add Native Stream Support for OpenAI Provider
What happened
When you invoke kubectl-ai with the OpenAI provider, the CLI buffers the entire completion and prints it only after the model has finished responding.
What to expect
Native streaming should be the default for all OpenAI calls. Benefits include:
- Instant feedback: see partial answers, log snippets or troubleshooting steps as they’re generated
- Lower memory usage: avoid buffering large responses in memory
- Consistency: match the user experience of other providers (e.g. Gemini via Vertex AI)
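For context on what native streaming involves: the OpenAI API streams completions as server-sent events, where each chunk arrives as a `data:` line and the stream ends with a `data: [DONE]` sentinel. Below is a minimal illustrative sketch (in Python, for brevity; kubectl-ai itself is Go) of consuming such a stream incrementally instead of buffering. The function name `stream_chunks` and the callback shape are hypothetical, not part of any real SDK:

```python
def stream_chunks(lines, on_chunk):
    """Parse SSE-style 'data:' lines, invoking on_chunk per payload.

    Each chunk is handed to the caller as soon as it is parsed, so output
    can be printed incrementally rather than accumulated in memory.
    Stops when the '[DONE]' sentinel is seen, per the OpenAI stream format.
    """
    for line in lines:
        if not line.startswith("data:"):
            continue  # skip blank keep-alive lines and comments
        payload = line[len("data:"):]
        if payload.startswith(" "):
            payload = payload[1:]  # SSE strips one leading space after the colon
        if payload.strip() == "[DONE]":
            return
        on_chunk(payload)


# Example: a fake event stream standing in for the HTTP response body.
fake_stream = [
    "data: kubectl",
    "",
    "data:  get",
    "data:  pods",
    "data: [DONE]",
]
stream_chunks(fake_stream, lambda chunk: print(chunk, end=""))
print()
```

In a real implementation the callback would write chunks straight to the terminal as they arrive, which is what gives the instant-feedback and low-memory benefits listed above.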
Stopgap solution for now: https://github.com/GoogleCloudPlatform/kubectl-ai/pull/209
@droot @janetkuo @justinsb Are there any technical complications you can think of? As of today, all providers except Gemini lack native streaming support.
#215 solved this; give it a try.