epic: Better Design abstraction for Remote AI (and Engineering abstraction)
Problem
- It's not clear to users that they need to add an API key to chat with remote models.
- It's not clear to users what the difference between a remote and a local model is.
- Right now it just looks like it's broken.
Success Criteria
- The RightPanel should indicate that this model is not set up yet. Let's try the label "API key needed".
- We should somehow link the user to Settings > Models to set it up (see the sketch after this list).
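A minimal sketch of how the RightPanel could surface this, assuming a React/TypeScript frontend; the component name, the `isConfigured` prop, and the `/settings/models` route are hypothetical illustrations, not existing Jan code:

```tsx
import React from "react";

// Hypothetical prop: whether the selected remote model's provider
// already has an API key configured.
interface ModelSetupBadgeProps {
  isConfigured: boolean;
}

// Renders an "API key needed" label for unconfigured remote models
// and links the user to Settings > Models to finish setup.
export function ModelSetupBadge({ isConfigured }: ModelSetupBadgeProps) {
  if (isConfigured) return null;
  return (
    <a href="/settings/models" title="Set up this model in Settings > Models">
      API key needed
    </a>
  );
}
```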
Design (Deprecated)
https://www.figma.com/file/ytn1nRZ17FUmJHTlhmZB9f/Jan-App?type=design&node-id=1190-33531&mode=design&t=hHvTFu6BxvZx3oQh-4
For review:
- In settings: https://www.figma.com/file/ytn1nRZ17FUmJHTlhmZB9f/Jan-App?type=design&node-id=2160-191190&mode=design&t=frUQsREQGVbBH5uL-4
- In threads: https://www.figma.com/file/ytn1nRZ17FUmJHTlhmZB9f/Jan-App?type=design&node-id=1190-33531&mode=design&t=frUQsREQGVbBH5uL-4
I looked through the UI mockups and I can see one thing is missing: letting the user specify their own URL for OpenAI-compatible models. For instance, I have a model running on my server via oobabooga (which provides an OpenAI-compatible API out of the box) or via ollama+LiteLLM, and I want to be able to chat with that model from Jan. For that, I need to be able to add a remote model with an OpenAI-compatible API at a custom URL.
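To make the request shape concrete: any server that speaks the OpenAI chat-completions protocol (oobabooga, ollama+LiteLLM, etc.) can be targeted just by swapping the base URL. A minimal TypeScript sketch; the base URL, model name, and API key below are placeholders, not real endpoints:

```ts
// Minimal sketch: chat with an OpenAI-compatible model at a custom base URL.
// BASE_URL, API_KEY, and the model name are placeholders.
const BASE_URL = "http://my-server:5000/v1"; // e.g. oobabooga or LiteLLM
const API_KEY = "sk-placeholder"; // some self-hosted servers accept any value

async function chat(prompt: string): Promise<string> {
  const res = await fetch(`${BASE_URL}/chat/completions`, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${API_KEY}`,
    },
    body: JSON.stringify({
      model: "my-local-model",
      messages: [{ role: "user", content: prompt }],
    }),
  });
  if (!res.ok) throw new Error(`Request failed: ${res.status}`);
  const data = await res.json();
  return data.choices[0].message.content;
}
```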
Thanks for flagging this @kha84. I'll add this user story as well!
I am sunsetting this issue in favor of a more holistic redesign of the "Provider" abstraction:
https://www.notion.so/jan-ai/Provider-Abstraction-for-Local-and-Remote-AI-d54448ad5ce34cb2845d986870b9395e?pvs=4