
Self-hosted LLM support

Open Mrjaggu opened this issue 1 year ago • 4 comments

Problem

We want to access our own custom-trained LLM via a private endpoint hosted in our local environment.

Mrjaggu avatar Mar 01 '24 09:03 Mrjaggu

Thank you for opening your first issue in this project! Engagement like this is essential for open source projects! :hugs:
If you haven't done so already, check out Jupyter's Code of Conduct. Also, please try to follow the issue template, as it helps other community members contribute more effectively. You can meet the other Jovyans by joining our Discourse forum. There is also an intro thread there where you can stop by and say hi! :wave:
Welcome to the Jupyter community! :tada:

welcome[bot] avatar Mar 01 '24 09:03 welcome[bot]

@Mrjaggu Thank you for opening this issue! This is already possible if the local LLM supports an "OpenAI-like" API. To do so, select any "OpenAI Chat" model and set the "Base URL" field to localhost and your port number.

If this doesn't meet your use case, however, then please feel free to describe your problem in more detail. For example, what self-hosted LLM services are you trying to use?
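As a sketch of the "OpenAI-like" approach above: any client that can build an OpenAI-style chat completion request can talk to such a local endpoint. The base URL, port number, and model name below are hypothetical values for illustration, not values from this thread.

```python
import json
import urllib.request


def build_chat_request(base_url: str, model: str, prompt: str) -> urllib.request.Request:
    """Build an OpenAI-style chat completion request for a self-hosted endpoint.

    base_url corresponds to what you would enter in jupyter-ai's "Base URL"
    field, e.g. http://localhost:8000/v1 (hypothetical host and port).
    """
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        f"{base_url}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )


# Sending is left out so the sketch stays runnable without a live server;
# in practice you would pass the request to urllib.request.urlopen().
req = build_chat_request("http://localhost:8000/v1", "my-local-model", "good morning")
print(req.full_url)  # http://localhost:8000/v1/chat/completions
```

The point is only that the request path and body shape are the standard OpenAI ones, so pointing jupyter-ai's "Base URL" at the local server is all the configuration needed.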

dlqqq avatar Mar 04 '24 22:03 dlqqq

See #389 for existing discussion on using self-hosted LLMs through the strategy I just described.

dlqqq avatar Mar 04 '24 22:03 dlqqq

Is it possible to use an internal LLM on the same network, with a token provided by MS Entra? We have the following steps:

Step 1 - Get the authorization code:

GET https://login.microsoftonline.com/<tenant id>/oauth2/v2.0/authorize?response_type=code&client_id=<client id>&scope=<api scope>&redirect_uri=<redirect_uri>

This returns: https://<redirect_uri>?code=<code>&session_state=<session_state>

Then exchange the code for a token:

POST https://login.microsoftonline.com/<tenant id>/oauth2/v2.0/token
Request Body:
grant_type: "authorization_code"
code: "<code generated in the previous step>"
redirect_uri: "<redirect_uri>"
client_id: "<client_id>"
client_secret: "<client secret>"

Step 2 - Get App Context id

POST https://login.microsoftonline.com/<tenant id>/oauth2/v2.0/token
Request Body:
client_id: "<client id>"
scope: "<api scope>"
client_secret: "<client secret>"
grant_type: "client_credentials"
Response Body:
{"token_type":"Bearer","expires_in":3599,"ext_expires_in":3599,"access_token":"<token>"}
Step 3 - Send the message:

POST https://<LLM Server>/api/tryout/v1/public/gpt3/chats/messages
Request Body:
{"messages":[{"role":"user","content":"good morning"}],"model":"gpt3","temperature":0.1}
Response Body:
[{"role":"assistant","content":"Good morning! How are you today?","tokenCount":17,"tokenLimitExceeded":false}]
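The steps above can be sketched end-to-end with only the standard library. The request bodies mirror those quoted in this comment; every angle-bracket value is a placeholder, and the response field names (access_token, content) are taken from the sample responses above, so treat this as an assumption-laden sketch rather than a working client.

```python
import json
import urllib.parse
import urllib.request


def build_token_request(tenant_id, client_id, client_secret, scope):
    """Step 2: client_credentials grant against MS Entra (form-encoded body)."""
    body = urllib.parse.urlencode({
        "grant_type": "client_credentials",
        "client_id": client_id,
        "client_secret": client_secret,
        "scope": scope,
    }).encode("utf-8")
    return urllib.request.Request(
        f"https://login.microsoftonline.com/{tenant_id}/oauth2/v2.0/token",
        data=body,
        headers={"Content-Type": "application/x-www-form-urlencoded"},
        method="POST",
    )


def build_message_request(llm_server, access_token, content):
    """Step 3: post a chat message to the internal endpoint quoted above."""
    payload = {
        "messages": [{"role": "user", "content": content}],
        "model": "gpt3",
        "temperature": 0.1,
    }
    return urllib.request.Request(
        f"https://{llm_server}/api/tryout/v1/public/gpt3/chats/messages",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {access_token}",
        },
        method="POST",
    )


# Sending is omitted so the sketch stays runnable offline; in practice each
# request goes through urllib.request.urlopen(), reading "access_token" from
# the step 2 response and "content" from the step 3 response.
tok_req = build_token_request("<tenant id>", "<client id>", "<client secret>", "<api scope>")
msg_req = build_message_request("<LLM Server>", "<token>", "good morning")
```

Since this endpoint is not OpenAI-compatible, the usual route (see the discussion in #389) is to wrap calls like these in a custom LangChain LLM class and register it with jupyter-ai as a custom provider, or to put an OpenAI-compatible proxy in front of the service so the "Base URL" approach described earlier in this thread applies.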

How do I configure the Jupyter AI assistant to work with this flow?

DanielCastroBosch avatar Jul 04 '24 18:07 DanielCastroBosch