kubectl-ai
[Feature]: How to support streaming requests
How can streaming requests be supported? For example, when running inference against large language models, streaming question answering can be enabled by setting `stream: true` in the request. Could kubectl-ai support this?
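As a minimal sketch of what `stream: true` means in practice: OpenAI-compatible APIs deliver the answer as Server-Sent Events, one `data: {json}` line per token delta, terminated by `data: [DONE]`, and the client concatenates the deltas as they arrive. The function and sample payloads below are illustrative assumptions, not taken from kubectl-ai's actual code.

```python
import json

def collect_stream(sse_lines):
    """Concatenate content deltas from SSE lines into the full answer."""
    parts = []
    for line in sse_lines:
        if not line.startswith("data: "):
            continue  # skip keep-alives / comments
        payload = line[len("data: "):]
        if payload == "[DONE]":
            break  # end-of-stream sentinel
        chunk = json.loads(payload)
        # Each chunk carries a partial token delta, not the full message.
        delta = chunk["choices"][0]["delta"].get("content", "")
        parts.append(delta)
    return "".join(parts)

# Example SSE lines, as a server might emit them when stream:true is set:
sample = [
    'data: {"choices":[{"delta":{"content":"Hello"}}]}',
    'data: {"choices":[{"delta":{"content":", world"}}]}',
    "data: [DONE]",
]
print(collect_stream(sample))  # -> Hello, world
```

In a real client the deltas would be printed incrementally as they arrive, which is what makes streaming feel responsive for long model answers.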