[Feature]: Add a custom AI that can call a REST API endpoint

Open lili-wan opened this issue 11 months ago • 4 comments

Checklist

  • [X] I've searched for similar issues and couldn't find anything matching
  • [X] I've discussed this feature request in the K8sGPT Slack and got positive feedback

Is this feature request related to a problem?

Yes

Problem Description

Our company has developed another platform on top of OpenAI for compliance and security reasons, and we can only use the internal API (which has an API signature similar to OpenAI's). Currently, the OpenAI integration calls CreateChatCompletion directly through the Go client, which does not work for our use case.

Solution Description

Can we have an AI option that calls the AI API as a REST endpoint? It would take the following parameters as configurable input from the CLI/API (a rough sketch follows the list):

  1. AI endpoint
  2. Request payload body (json string)
  3. Authentication header
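
A minimal sketch of how such a backend could call the endpoint in Go. All names here are hypothetical, not existing k8sgpt code; the endpoint, payload, and header would come from the configurable inputs above:

```go
package restai

import (
	"bytes"
	"fmt"
	"io"
	"net/http"
)

// RestAIConfig holds the three configurable inputs proposed above.
// The field names are illustrative only.
type RestAIConfig struct {
	Endpoint   string // 1. AI endpoint, e.g. https://internal-ai.example.com/v1/chat/completions
	Payload    string // 2. request payload body (JSON string)
	AuthHeader string // 3. authentication header value, e.g. "Bearer <token>"
}

// GetCompletion POSTs the configured payload to the configured endpoint
// and returns the raw response body.
func (c *RestAIConfig) GetCompletion() (string, error) {
	req, err := http.NewRequest(http.MethodPost, c.Endpoint, bytes.NewBufferString(c.Payload))
	if err != nil {
		return "", err
	}
	req.Header.Set("Content-Type", "application/json")
	req.Header.Set("Authorization", c.AuthHeader)

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()

	body, err := io.ReadAll(resp.Body)
	if err != nil {
		return "", err
	}
	if resp.StatusCode != http.StatusOK {
		return "", fmt.Errorf("unexpected status %d: %s", resp.StatusCode, body)
	}
	return string(body), nil
}
```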

Benefits

This provides more flexibility on the client side to configure the REST endpoint directly.

Potential Drawbacks

No response

Additional Information

No response

lili-wan avatar Feb 27 '24 23:02 lili-wan

Hi @lili-wan, thanks for creating this issue. If you're using an OpenAI-like API, you can configure an alternative endpoint, e.g.

```bash
k8sgpt auth new --backend localai --model <model_name> --baseurl http://localhost:8080/v1
```

Would this work for you? Or do you have a completely different API?
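
For context, pointing an OpenAI-compatible client at an alternative base URL looks roughly like the snippet below with the go-openai library (a sketch assuming the internal API speaks the OpenAI chat-completions protocol; this is not the actual k8sgpt source):

```go
package main

import (
	"context"
	"fmt"

	openai "github.com/sashabaranov/go-openai"
)

func main() {
	// Redirect the standard OpenAI client to an alternative,
	// OpenAI-compatible endpoint (what the --baseurl flag configures).
	cfg := openai.DefaultConfig("your-api-key")
	cfg.BaseURL = "http://localhost:8080/v1"
	client := openai.NewClientWithConfig(cfg)

	resp, err := client.CreateChatCompletion(context.Background(), openai.ChatCompletionRequest{
		Model: "model_name",
		Messages: []openai.ChatCompletionMessage{
			{Role: openai.ChatMessageRoleUser, Content: "What is deep learning?"},
		},
	})
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println(resp.Choices[0].Message.Content)
}
```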

AlexsJones avatar Mar 08 '24 11:03 AlexsJones

Would it make sense to provide a LocalAI sample project to help users integrate with their self-hosted LLMs? Just a suggestion. @AlexsJones

warjiang avatar Mar 12 '24 02:03 warjiang

There are lots of tutorials on how to use k8sgpt with LocalAI. The question I have is regarding @lili-wan's use case for a generic RESTful API.

AlexsJones avatar Mar 12 '24 06:03 AlexsJones

I've tried to set up the localai backend to point to an endpoint built with Hugging Face TGI:

```bash
k8sgpt auth update localai --model tgi --baseurl https://deepseek.k8scluster.ch/v1
```

but I get the following error:

```
➜ k8sgpt analyze -b localai --explain
   0% |                                                                                                              | (0/17, 0 it/hr) [0s:0s]
Error: failed while calling AI provider localai: error, status code: 422, message:
```

Running a test query directly, however, works fine:

```bash
curl https://deepseek.k8scluster.ch/v1/chat/completions \
    -X POST \
    -d '{
  "model": "tgi",
  "messages": [
    {
      "role": "system",
      "content": "You are a helpful assistant."
    },
    {
      "role": "user",
      "content": "What is deep learning?"
    }
  ],
  "stream": true,
  "max_tokens": 100
}' \
    -H 'Content-Type: application/json'
```
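
A 422 from TGI typically means the request body failed validation, so the interesting question is what k8sgpt actually sends that the curl above doesn't. One way to find out is to run a small logging proxy, point --baseurl at it, and inspect the forwarded bodies (a debugging sketch, not part of k8sgpt; the upstream URL is just this thread's example):

```go
package main

import (
	"bytes"
	"io"
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
)

func main() {
	// Forward requests to the real endpoint while logging each body.
	upstream, err := url.Parse("https://deepseek.k8scluster.ch")
	if err != nil {
		log.Fatal(err)
	}
	proxy := httputil.NewSingleHostReverseProxy(upstream)

	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		body, _ := io.ReadAll(r.Body)
		log.Printf("%s %s\n%s", r.Method, r.URL.Path, body)
		// Restore the body so the proxy can forward it unchanged.
		r.Body = io.NopCloser(bytes.NewReader(body))
		r.Host = upstream.Host // send the correct Host header upstream
		proxy.ServeHTTP(w, r)
	})

	// Then run: k8sgpt auth update localai --model tgi --baseurl http://localhost:9999/v1
	log.Fatal(http.ListenAndServe(":9999", nil))
}
```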

remmen-io avatar May 13 '24 09:05 remmen-io