jupyter-ai
A generative AI extension for JupyterLab
### Problem Currently, to test anything involving interaction with a language model, or to test the chat UI, a real API key or a locally hosted language model is required. This is not optimal...
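One way to address this (a sketch only, not Jupyter AI's actual test infrastructure; all names here are hypothetical) is a fake provider that returns canned responses, so chat and UI tests need no API key or local model:

```python
class FakeChatProvider:
    """Test double for a language model provider.

    Returns canned responses instead of calling a real API, so tests
    can exercise chat logic without an API key or a local model.
    """

    def __init__(self, responses):
        self.responses = list(responses)
        self.calls = []  # record prompts so tests can assert on them

    def generate(self, prompt: str) -> str:
        self.calls.append(prompt)
        # Cycle through the canned responses in order.
        return self.responses[(len(self.calls) - 1) % len(self.responses)]
```

A test would inject this in place of the real provider and assert on both the replies and the recorded prompts.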
## Description In the chat settings panel, when "SageMaker endpoint :: *" is selected as the language model, the default region name appears with "Region name (required)" superimposed on it...
### Problem The chat conversation memory window of 2 is rather small, especially as people are used to long memories from their experience with ChatGPT. ### Proposed Solution * Make...
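A configurable window could look like the following sketch (illustrative names, not Jupyter AI's actual memory class), where `k=2` mirrors the current default and users could opt into longer histories:

```python
from collections import deque


class ConversationMemory:
    """Keep the last `k` exchanges (human + AI message pairs)."""

    def __init__(self, k: int = 2):
        self.k = k
        self._exchanges = deque(maxlen=k)  # oldest pairs are dropped

    def add_exchange(self, human: str, ai: str):
        self._exchanges.append((human, ai))

    def context(self):
        # Flatten the stored pairs into the prompt context, oldest first.
        return [msg for pair in self._exchanges for msg in pair]
```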
### Problem * I am unable to use ChatVertexAI as a provider for chat, as the final message in the ChatPromptTemplate is an AIMessage. https://github.com/jupyterlab/jupyter-ai/blob/0f619061430be6320bf9b87c63a1b939597998f9/packages/jupyter-ai/jupyter_ai/chat_handlers/default.py#L35-L52 * LangChain's ChatVertexAI requires that the last...
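The fix would be to order the template so the pending human input is always the final message. A plain-Python sketch of that ordering (roles and system text are illustrative, not LangChain's API):

```python
def build_prompt(history, pending_human_message):
    """Assemble a chat prompt whose final message is the human turn.

    Providers such as ChatVertexAI reject prompts ending in an AI
    message, so the user's pending input must come last. `history`
    is a list of (role, content) tuples.
    """
    messages = [("system", "You are a helpful assistant in JupyterLab.")]
    messages.extend(history)
    # Append the human turn last, regardless of how history ends.
    messages.append(("human", pending_human_message))
    return messages
```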
## Description The `/generate` command in the chat UI can generate a notebook with no code cells, or with empty code cells where code was expected. You can see this...
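A guard against this could validate the generated notebook before writing it, a sketch over nbformat-style dicts (the function name is hypothetical):

```python
def has_empty_code_cells(notebook: dict) -> bool:
    """Check an nbformat-style notebook dict for missing or empty code cells.

    A /generate result failing this check could be retried or flagged.
    """
    code_cells = [c for c in notebook.get("cells", [])
                  if c.get("cell_type") == "code"]
    if not code_cells:
        return True  # no code at all where code was expected
    # nbformat allows `source` to be a string or a list of lines.
    return any(not "".join(c.get("source", "")).strip() for c in code_cells)
```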
## Summary If an error occurs during a chat session for any reason, an error related to the indexing is always returned to the user, along with the exception stack...
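One remedy (a sketch, with hypothetical exception and function names) is to reserve the index-related message for genuine index failures and surface every other exception as itself:

```python
class DocumentIndexError(Exception):
    """Raised only when the /learn document index actually fails."""


def handle_chat_error(exc: Exception) -> str:
    # Only show index-repair advice when the index is really at fault;
    # otherwise report the actual error to the user.
    if isinstance(exc, DocumentIndexError):
        return "There was a problem with the document index; try running /learn again."
    return f"An error occurred: {exc!r}"
```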
### Problem When I send a prompt to a SageMaker endpoint `jumpstart-dft-hf-translation-t5-small`, which expects plain text, not JSON, as input, I get an error (see below). This is because Jupyter...
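A per-endpoint content handler could let users override the JSON serialization. The sketch below mirrors the transform-input/transform-output shape of LangChain's SageMaker content handlers in plain Python (class name and response schema are illustrative):

```python
import json


class PlainTextContentHandler:
    """Sketch of a content handler for text-in models like
    jumpstart-dft-hf-translation-t5-small, which expect the raw
    prompt bytes rather than a JSON envelope.
    """

    content_type = "application/x-text"
    accepts = "application/json"

    def transform_input(self, prompt: str, model_kwargs: dict) -> bytes:
        # Send the prompt as-is instead of json.dumps({"inputs": ...}).
        return prompt.encode("utf-8")

    def transform_output(self, output: bytes) -> str:
        # This endpoint family returns JSON like
        # [{"translation_text": "..."}] (assumed here for illustration).
        payload = json.loads(output.decode("utf-8"))
        return payload[0]["translation_text"]
```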
### Problem Each LLM provider and model has limits on the input size of prompts. When a user asks questions about indexed docs with `/ask`, the chunk size used to...
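A provider-aware chunker is one way to keep `/ask` prompts within limits; the sketch below splits by character budget (a stand-in for a real token count) with optional overlap:

```python
def chunk_text(text: str, max_chars: int, overlap: int = 0):
    """Split text into chunks of at most `max_chars` characters.

    In practice `max_chars` would be derived from the selected
    provider's context window rather than hard-coded.
    """
    if max_chars <= overlap:
        raise ValueError("max_chars must exceed overlap")
    chunks, start = [], 0
    while start < len(text):
        chunks.append(text[start:start + max_chars])
        start += max_chars - overlap  # step back by `overlap` for context
    return chunks
```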
## Summary If a directory has already been indexed by a previous `/learn` command and is submitted for learning again, this proceeds without any warning. However, when the `/ask` command...
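Detecting a repeat `/learn` could be done by fingerprinting the directory and comparing against what was indexed before; a sketch (the function is hypothetical, not Jupyter AI's indexer):

```python
import hashlib
import os


def directory_fingerprint(path: str) -> str:
    """Fingerprint a directory by its file paths and modification times.

    If the fingerprint matches a previously learned one, /learn could
    warn and skip (or replace the old chunks) instead of silently
    indexing the same content twice.
    """
    h = hashlib.sha256()
    for root, _dirs, files in sorted(os.walk(path)):
        for name in sorted(files):
            full = os.path.join(root, name)
            h.update(full.encode())
            h.update(str(os.path.getmtime(full)).encode())
    return h.hexdigest()
```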
### Problem Current prompts in Jupyter AI work well with remote providers, but are not optimized for locally installed models provided by GPT4All. See the discussion on #190 to some...
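Local instruction-tuned models often expect a specific prompt layout, so one option is a per-provider template table; a sketch (the template strings and keys are illustrative only):

```python
# Per-provider prompt templates (illustrative values, not the
# actual templates shipped by Jupyter AI or required by GPT4All).
PROMPT_TEMPLATES = {
    "default": "You are a helpful assistant. {input}",
    "gpt4all": "### Instruction:\n{input}\n### Response:\n",
}


def render_prompt(provider_id: str, user_input: str) -> str:
    """Pick the template for the provider, falling back to the default."""
    template = PROMPT_TEMPLATES.get(provider_id, PROMPT_TEMPLATES["default"])
    return template.format(input=user_input)
```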