xyy82888

3 issue results from xyy82888

How to support streaming requests? For example, when running inference on large models, streaming question answering can be enabled via `stream: true`. Could kubectl-ai support this?

enhancement
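For context on the request above: in an OpenAI-compatible chat API, setting `stream: true` switches the response to Server-Sent Events, where each `data:` line carries a partial answer chunk. A minimal sketch of consuming such a stream follows; the payload lines here are illustrative examples, not captured from a real server.

```python
import json

# Simulated Server-Sent Events lines, as an OpenAI-compatible endpoint
# returns them when the request body sets "stream": true. Each "data:"
# line holds one JSON chunk with a partial answer in choices[0].delta.
sse_lines = [
    'data: {"choices": [{"delta": {"content": "Hel"}}]}',
    'data: {"choices": [{"delta": {"content": "lo"}}]}',
    'data: [DONE]',
]

def collect_stream(lines):
    """Assemble the full answer text from streamed delta chunks."""
    parts = []
    for line in lines:
        payload = line[len('data: '):]
        if payload == '[DONE]':  # end-of-stream sentinel
            break
        chunk = json.loads(payload)
        delta = chunk['choices'][0]['delta'].get('content', '')
        parts.append(delta)
    return ''.join(parts)

print(collect_stream(sse_lines))  # → Hello
```

A client that supports streaming would print each delta as it arrives instead of buffering the whole list, which is what makes the answer appear incrementally.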

I asked only one question, and kubectl-ai.log ended up 2.1 MB in size. That is excessive. How can I turn off the log?

**Environment (please complete the following):**
- OS: TencentOS Server 3.1 (Final)
- kubectl-ai version (run `kubectl-ai version`): 0.0.15
- LLM provider: openai
- LLM model: qwen3-7b / DeepSeek-R1-Distill-Llama-8B

**Describe the...

bug