
[Feature]: Support Custom AI backends.

Open atul86244 opened this issue 10 months ago • 5 comments

Checklist

  • [X] I've searched for similar issues and couldn't find anything matching
  • [X] I've discussed this feature request in the K8sGPT Slack and got positive feedback

Is this feature request related to a problem?

No

Problem Description

Please add support for using custom AI backends with k8sGPT. This would let people use k8sGPT with in-house AI backends, increasing adoption of k8sGPT.

Solution Description

We need the ability to use k8sGPT with custom, in-house AI backends. For example, I want to use k8sGPT at my company, with the company's own AI solution as the backend for k8sGPT.

Benefits

This would let people use k8sGPT with in-house AI backends, increasing adoption of k8sGPT.

Potential Drawbacks

No response

Additional Information

No response

atul86244 avatar Apr 19 '24 11:04 atul86244

Hey @atul86244, we support OpenAI's API spec; do you have a different use case in mind?

arbreezy avatar Apr 19 '24 21:04 arbreezy

Hi @arbreezy, thanks for your response. I was going through this doc https://docs.k8sgpt.ai/reference/providers/backend/ and was trying to figure out how I can point k8sGPT at my company's AI backend. If I have my own custom AI that exposes an endpoint, can I point k8sGPT to it?

I am not sure if the spec below provides a way to do that:

kubectl apply -f - << EOF
apiVersion: core.k8sgpt.ai/v1alpha1
kind: K8sGPT
metadata:
  name: k8sgpt-sample
  namespace: k8sgpt-operator-system
spec:
  ai:
    enabled: true
    model: gpt-3.5-turbo
    backend: openai
    secret:
      name: k8sgpt-sample-secret
      key: openai-api-key
    # anonymized: false
    # language: english
  noCache: false
  repository: ghcr.io/k8sgpt-ai/k8sgpt
  version: v0.3.8
  #integrations:
  # trivy:
  #  enabled: true
  #  namespace: trivy-system
  # filters:
  #   - Ingress
  # sink:
  #   type: slack
  #   webhook: <webhook-url> # use the sink secret if you want to keep your webhook url private
  #   secret:
  #     name: slack-webhook
  #     key: url
  #extraOptions:
  #   backstage:
  #     enabled: true
EOF

atul86244 avatar Apr 21 '24 06:04 atul86244
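For readers landing on this thread with the same question: one way this could look in the operator spec is a base-URL override on the `openai` backend, so the OpenAI-compatible client is simply pointed at an in-house endpoint. The `baseUrl` field and endpoint below are assumptions for illustration, not a confirmed part of the CRD; check the operator's CRD reference for the field your version actually supports.

```yaml
# Sketch only -- assumes the operator CRD exposes a base-URL override
# (the field name `baseUrl` and the endpoint are illustrative assumptions).
apiVersion: core.k8sgpt.ai/v1alpha1
kind: K8sGPT
metadata:
  name: k8sgpt-sample
  namespace: k8sgpt-operator-system
spec:
  ai:
    enabled: true
    model: gpt-3.5-turbo
    backend: openai
    baseUrl: https://llm.example.internal/v1   # hypothetical in-house OpenAI-compatible endpoint
    secret:
      name: k8sgpt-sample-secret
      key: openai-api-key
```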

I am also interested in this. I have a custom API endpoint that supports the OpenAI API spec, but neither tinyllama nor localAI supports the auth tokens my endpoint needs. Can we either add a custom baseURL field to the openai provider, or an auth-token field to localAI or tinyllama? Please correct me if this already exists.

Thanks!

boixu avatar Apr 26 '24 17:04 boixu
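To make the comments above concrete: "supports the OpenAI API spec" means the backend accepts the same request shape OpenAI does, so any client that can override its base URL can talk to it. The sketch below is not k8sgpt's actual client code; the endpoint URL and key are hypothetical. It shows the minimal chat-completion request (POST to `{base_url}/chat/completions` with a Bearer token) a compatible in-house endpoint would need to accept.

```python
import json
import urllib.request


def build_chat_request(base_url: str, api_key: str, model: str,
                       prompt: str) -> urllib.request.Request:
    """Build an OpenAI-spec chat-completion request against a custom base URL.

    Any backend that accepts this shape is "OpenAI-compatible" in the sense
    discussed in this thread. The Bearer token is the auth-token support
    requested above.
    """
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode("utf-8")
    return urllib.request.Request(
        url=f"{base_url.rstrip('/')}/chat/completions",
        data=body,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
        method="POST",
    )


# Hypothetical in-house gateway; nothing is sent until urlopen() is called.
req = build_chat_request("https://llm.example.internal/v1",
                         "sk-demo", "gpt-3.5-turbo", "hello")
print(req.full_url)  # https://llm.example.internal/v1/chat/completions
```

A gateway (AWS API Gateway, Kong, etc.) fronting any model only has to translate this request/response shape; the model behind it is irrelevant to the client.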

Hi Team, can you please help with this.

atul86244 avatar May 11 '24 06:05 atul86244

+1. Many users and corporations host their various LLMs behind self-hosted API gateways (e.g. AWS API Gateway, Kong) over REST, regardless of which LLM models sit behind them. In that case, it would just be a request call to the backend API.

haofeif avatar Sep 04 '24 13:09 haofeif