bug: Edge cases with RAG for remote OpenAI-compatible models
Describe the bug
When the user turns on Retrieval:
- The user might choose a remote OpenAI LLM model and might also need to add an embedding API endpoint, which creates some edge cases (a minimal sketch of this branching follows the list):
  - If the user uses Platform OpenAI, they can easily choose between `ada-002` or `ada-003`. They need to add a key in order to send a file.
  - If the user uses Azure OpenAI, they need to create an endpoint for embedding and add it to the `openai` models (we have not supported this case yet). They need to add a key in order to send a file.
  - If the user uses a remote OpenAI-compatible endpoint, they might only have a `chat/completion` endpoint and no embedding endpoint. We might provide the user with the choice of a local embedding model in this case.
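To make the branching concrete, here is a minimal TypeScript sketch of how the retrieval logic might pick an embedding source per provider. The type and function names (`RemoteProvider`, `resolveEmbeddingSource`, the endpoint strings, the local fallback model) are illustrative assumptions, not Jan's actual API.

```typescript
// Hypothetical sketch: choosing an embedding source for Retrieval when the
// chat model is remote. Names and endpoints are illustrative only.

type RemoteProvider = 'openai-platform' | 'azure-openai' | 'openai-compatible';

interface EmbeddingSource {
  kind: 'remote' | 'local';
  endpoint?: string; // embedding endpoint, if the provider exposes one
  model: string;
}

function resolveEmbeddingSource(
  provider: RemoteProvider,
  embeddingEndpoint: string | undefined,
  apiKey: string | undefined,
): EmbeddingSource {
  // Platform OpenAI: embeddings are available, but a key is still required
  // before any file can be sent.
  if (provider === 'openai-platform') {
    if (!apiKey) throw new Error('API key required to send files for retrieval');
    return { kind: 'remote', endpoint: 'https://api.openai.com/v1/embeddings', model: 'ada-002' };
  }

  // Azure OpenAI: the user must deploy a separate embedding endpoint and
  // register it alongside the chat model (currently unsupported).
  if (provider === 'azure-openai') {
    if (!embeddingEndpoint) throw new Error('Azure OpenAI requires a dedicated embedding endpoint');
    if (!apiKey) throw new Error('API key required to send files for retrieval');
    return { kind: 'remote', endpoint: embeddingEndpoint, model: 'ada-002' };
  }

  // Generic OpenAI-compatible server: it may only expose chat/completion,
  // so fall back to a local embedding model when no embedding endpoint exists.
  if (!embeddingEndpoint) {
    return { kind: 'local', model: 'local-embedding-model' };
  }
  return { kind: 'remote', endpoint: embeddingEndpoint, model: 'ada-002' };
}
```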
Steps to reproduce
Steps to reproduce the behavior:
- Enable experimental features in Settings
- Use OpenAI models
- Do not add an API key
- See the error
Expected behavior
- Clear definition and handling in the UI/logic for these cases (a rough sketch follows)
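As a rough illustration of what "clear definition and handling" could mean, the sketch below validates the retrieval configuration before a file upload and returns a user-facing message instead of a raw error. All names and messages here are hypothetical, not Jan's existing code.

```typescript
// Hypothetical guard run before attaching a file for retrieval.
// Returns a user-facing message if the configuration is incomplete,
// or null if the upload can proceed.

interface RetrievalConfig {
  provider: 'openai-platform' | 'azure-openai' | 'openai-compatible';
  apiKey?: string;
  embeddingEndpoint?: string;
}

function validateRetrievalConfig(cfg: RetrievalConfig): string | null {
  if ((cfg.provider === 'openai-platform' || cfg.provider === 'azure-openai') && !cfg.apiKey) {
    return 'Add your API key before attaching files for retrieval.';
  }
  if (cfg.provider === 'azure-openai' && !cfg.embeddingEndpoint) {
    return 'Azure OpenAI needs a separate embedding endpoint. Add one in model settings.';
  }
  if (cfg.provider === 'openai-compatible' && !cfg.embeddingEndpoint) {
    return 'No embedding endpoint found. A local embedding model can be used instead.';
  }
  return null; // configuration is usable
}
```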
Screenshots
If applicable, add screenshots to help explain your issue.
Environment details
- All platforms, since the issue is in the TypeScript logic
Logs
Additional context
Add any other context or information that could be helpful in diagnosing the problem.